The calendar below provides information on the course's lectures (L), recitation (R), and project presentation (P) sessions.

| SES # | TOPICS | ASSIGNMENTS |
| --- | --- | --- |
| L1 | Administrative & Introduction; Makefile Primer; GNU Makefile Documentation; CVS Documentation | PS 0 Out |
| R1 | Course Goals & Content, References & Recitations; Compilation; Debugging; Makefiles; Concurrent Versions System (CVS); Introduction to C++; Data Types; Variable Declarations and Definitions; Operators; Expressions and Statements; Input/Output Operators; Preprocessor Directives; Header Files; Control Structures | |
| L2 | Overview of C++ and Object Oriented Design | PS 0 In; PS 1 Out |
| L3 | Classes and Objects | |
| R2 | Functions: Declarations, Definitions, and Invocations; Inline Functions; Function Overloading; Recursion; Scope and Extent of Variables; References; Pointers; Function Call by Value, References and Pointers; Pointers to Functions; 1-D Arrays; Strings as Arrays of Char; Arrays of Pointers; 2-D and Higher Dimension Arrays; Return by Reference Functions; Dynamic Memory Allocation; The sizeof Operator; Data Structures; Introduction to Classes and Objects | |
| L4 | Dynamic Management of Objects | PS 1 In; PS 2 Out |
| L5 | Operator Overloading | |
| R3 | Classes and Objects; Classes: Member Variables & Member Functions; Classes: Constructors & Destructors; Constructor Header Initialization; Copy Constructors; Member Variables & Functions Protection: Private, Protected & Public; Static Class Data and Class Functions; Class Scope; Pointers to Class Members; Operator Overloading; Friend Functions; Type Conversions | |
| L6 | Inheritance | PS 2 In; PS 3 Out |
| L7 | Linked Lists, Static Class Members | |
| R4 | Inheritance: Public, Protected and Private Derivation; Multiple Inheritance; Inheritance: Constructors and Destructors; Inheritance: Redefining Member Functions; Virtual Functions and Polymorphism; Abstract Classes; File Streams; Namespaces; Assertions; C++ Standard Library String Class; Other Topics | |
| L8 | Quiz Review | PS 3 In; Quiz I: C++; PS 4 Out |
| R5 | Function Templates; Class Templates; Sorting and Searching Algorithms; Insertion Sort; Selection Sort; Shellsort; Quicksort; Linear Search; Binary Search | |
| L9 | Templates, Sorting & Searching Algorithms | |
| R6 | Introduction to Java; Compiling and Running a Java Application and a Java Applet; Data Types; Variables, Declarations, Initializations, Assignments; Operators, Precedence, Associativity, Type Conversions, and Mixed Expressions; Control Structures; Comments; Arrays; Classes and Objects; Constructors; Initializers; Member Data and Functions; Function Overloading | |
| L10 | Programming in Java®; Shape Example | PS 4 In; PS 5 Out |
| L11 | Java® Basics (contd.) | |
| R7 | Sun Java Studio Standard 5; Inheritance; Controlling Access to Class Members; Strings; Packages; Interfaces; Nested Classes and Interfaces; Garbage Collection; Applets | |
| L12 | Graphical Programs | PS 5 In; PS 6 Out |
| L13 | Applets and Applications | |
| R8 | Exceptions; Threads; I/O; Introduction to Java GUI and Swing | |
| L14 | Custom Graphics | |
| L15 | File I/O | PS 6 In; PS 7 Out |
| R9 | The JComponent Class; Top-Level Containers; Intermediate Swing Containers; Atomic Components | |
| L16 | Quiz Review | Project Proposal Due; Quiz II: Sorting, Searching and Java® |
| L17 | Multithreading; Working with Images | |
| L18 | Physical Simulations | PS 7 In |
| L19 | Source Code Management Using CVS | |
| L20 | Java® Remote Method Invocation Framework | |
| L21 | Java Beans; Java® 3D | |
| P1 | Project Presentation I | |
| P2 | Project Presentation II | |
| P3 | Project Presentation III | |
Programs that Function as Applets and as Applications
The following example shows how you might write a Java® program so that it can function either as an applet or as an application. The program can run as an applet because it extends JApplet, and it can run as an application because it has a main routine. The code creates the UI components within a JPanel and then sets this panel as the content pane for a JApplet or a JFrame. When the program runs as an applet, the JApplet class is used as the top-level container. When the program runs as an application, we create a JFrame and use this as the top-level container.
MyApp.java
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.event.*;
// This class can be run either as an Applet or as an Application.
public class MyApp extends JApplet {
// The RootPaneContainer interface is implemented by JApplet and by JFrame.
// It specifies methods like setContentPane() and getContentPane(). The
// content pane is of type java.awt.Container or one of its subclasses.
RootPaneContainer mRPC;
// This constructor is used when we run as an applet.
public MyApp() {
mRPC = this;
}
// This constructor is used when we run as an application.
public MyApp(JFrame frame) {
mRPC = frame;
}
// The init method is the place to put the code to initialize the applet. The code to set up the
// user interface usually goes here. We avoid putting applet initialization code in applet constructors
// because an applet is not guaranteed to have a full environment until the init method is called.
public void init() {
// We will put all our components in a JPanel and then set this panel
// as the content pane for the applet or application.
JPanel panel = new JPanel();
panel.setLayout(new BorderLayout());
JSlider slider = new JSlider(0,50,0);
panel.add(slider, BorderLayout.SOUTH);
final DrawingArea drawingArea = new DrawingArea();
panel.add(drawingArea, BorderLayout.CENTER);
slider.addChangeListener(new ChangeListener() {
public void stateChanged(ChangeEvent e) {
JSlider source = (JSlider)e.getSource();
if (!source.getValueIsAdjusting()) {
int offset = (int)source.getValue();
drawingArea.setOffset(offset);
drawingArea.repaint();
}
}
});
mRPC.setContentPane(panel);
}
// The start method is the place to start the execution of the applet.
// For example, this is where you would tell an animation to start running.
public void start() {
}
// The stop method is the place to stop the execution of the applet.
// This is where you would tell an animation to stop running.
public void stop() {
}
// The destroy method is where you would do any final cleanup that needs to be done. The
// destroy method is rarely required, since most of the cleanup can usually be done in stop().
public void destroy() {
}
public static void main(String[] args) {
JFrame frame = new JFrame();
final MyApp app = new MyApp(frame);
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
app.stop();
app.destroy();
System.exit(0);
}
});
app.init();
frame.setSize(400, 400);
frame.setVisible(true);
app.start();
}
}
// A user interface component, which is to be added to the applet.
class DrawingArea extends JPanel {
private int mOffset;
public DrawingArea() {
setBackground(Color.white);
}
public void setOffset(int offset) {
mOffset = offset;
}
public void paintComponent(Graphics g) {
super.paintComponent(g);
g.setFont(new Font("Helvetica", Font.PLAIN, 24));
g.setColor(Color.green);
g.drawString("An Applet or an Application?", 10+mOffset, 50);
g.drawString("That is the question.", 10+mOffset, 100);
}
}
mypage.html
<HTML>
<APPLET CODE="MyApp.class" WIDTH=400 HEIGHT=400>
</APPLET>
</HTML>
Topics
1. Custom Painting
(Ref. Java® Tutorial)
So far, we have seen user interface components that display static content. The individual components possessed sufficient knowledge to draw themselves, so we did not have to do anything special beyond creating the components and describing their layout. If a component is obscured by some other window and then uncovered again, it is the job of the window system to make sure that the component is properly redrawn.
There are instances, however, where we will want to change the appearance of a component, e.g. we may wish to draw a graph, display an image, or even display an animation within the component. This requires custom painting code. The recommended way to implement custom painting is to extend the JPanel class. We will need to be concerned with two methods:
- The paintComponent() method specifies what the component should draw. We can override this method to draw text, graphics, etc. The paintComponent() method should never be called directly. It will be called indirectly, either because the window system thinks that the component should draw itself or because we have issued a call to repaint().
- The repaint() method forces the screen to update as soon as possible. It results in a call to the paintComponent() method. repaint() behaves asynchronously, i.e. it returns immediately without waiting for the paintComponent() method to complete.
The following code illustrates how custom painting works. A JPanel subclass is used to listen to mouse events and then display a message at the location where the mouse is pressed or released.
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
class Main {
public static void main(String[] args0) {
JFrame frame = new JFrame();
frame.setSize(400, 400);
DrawingArea drawingArea = new DrawingArea();
frame.getContentPane().add(drawingArea);
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
frame.setVisible(true);
}
}
class DrawingArea extends JPanel {
private String mText;
private static String mStr1 = "The mouse was pressed here!";
private static String mStr2 = "The mouse was released here!";
private int miX, miY;
// The constructor simply registers the drawing area to receive mouse events from itself.
public DrawingArea() {
addMouseListener(new MouseAdapter() {
public void mousePressed(MouseEvent e) {
miX = e.getX();
miY = e.getY();
mText = mStr1;
repaint();
}
public void mouseReleased(MouseEvent e) {
miX = e.getX();
miY = e.getY();
mText = mStr2;
repaint();
}
});
}
// The paint method. This gets called in response to repaint().
public void paintComponent(Graphics g) {
super.paintComponent(g); // This paints the background.
if (mText != null)
g.drawString(mText, miX, miY);
}
}
Note that prior to the introduction of the Swing package, one would override the paint() method to implement custom painting. In Swing applications, however, we override the paintComponent() method instead. The paintComponent() method will be called by the paint() method in class JComponent. The JComponent’s paint() method also implements features such as double buffering, which are useful in animation.
2. Simple 2D Graphics
The paintComponent() method gives us a graphics context, which is an instance of a Graphics subclass. A graphics context bundles information such as the area into which we can draw, the font and color to be used, the clipping region, etc. Note that we do not instantiate the graphics context in our program; in fact the Graphics class itself is an abstract class. The Graphics class provides methods for drawing simple graphics primitives, like lines, rectangles, ovals, arcs and polygons. It also provides methods for drawing text, as we saw above.
This program illustrates how to draw some basic shapes.
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
class Main {
public static void main(String[] args0) {
JFrame frame = new JFrame();
frame.setSize(400, 400);
DrawingArea drawingArea = new DrawingArea();
frame.getContentPane().add(drawingArea);
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
frame.setVisible(true);
}
}
class DrawingArea extends JPanel {
public void paintComponent(Graphics g) {
super.paintComponent(g);
// Draw some simple geometric primitives.
g.setColor(Color.red);
g.drawLine(10, 10, 40, 50); // x1, y1, x2, y2
g.setColor(Color.green);
g.drawRect(100, 100, 40, 30); // x, y, width, height
g.setColor(Color.yellow);
g.drawOval(100, 200, 30, 50); // x, y, width, height
g.setColor(Color.blue);
g.drawArc(200, 200, 50, 30, 45, 90); // x, y, width, height, start angle, arc angle
int x1_points[] = {100, 130, 140, 115, 90};
int y1_points[] = {300, 300, 340, 370, 340};
g.setColor(Color.black);
g.drawPolygon(x1_points, y1_points, x1_points.length); // x array, y array, length
int x2_points[] = {300, 330, 340, 315, 290};
int y2_points[] = {300, 300, 340, 370, 340};
g.setColor(Color.cyan);
g.drawPolyline(x2_points, y2_points, x2_points.length); // x array, y array, length
g.setColor(Color.orange);
g.fillRect(300, 100, 40, 30); // x, y, width, height
g.setColor(Color.magenta);
g.fill3DRect(300, 200, 40, 30, true); // x, y, width, height, raised
}
}
The Java® 2D API provides a range of advanced capabilities, such as stroking and filling, affine transformations, compositing and transparency.
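As a rough, self-contained illustration of those capabilities, the sketch below (our own example, not from the course notes) obtains a Graphics2D from an offscreen BufferedImage, so it runs headlessly without opening a window:

```java
import java.awt.*;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class Java2DDemo {
    // Render a few Java 2D effects into an offscreen image and return it.
    public static BufferedImage render() {
        BufferedImage image = new BufferedImage(200, 200, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2 = image.createGraphics();
        // Stroking: a 5-pixel-wide dashed red line.
        g2.setStroke(new BasicStroke(5f, BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND,
                                     1f, new float[] {10f, 5f}, 0f));
        g2.setColor(Color.red);
        g2.drawLine(10, 100, 190, 100);
        // Affine transformation: rotate the next shape 45 degrees about (100, 100).
        AffineTransform saved = g2.getTransform();
        g2.rotate(Math.PI / 4, 100, 100);
        g2.setColor(Color.black);
        g2.drawRect(70, 70, 60, 60);
        g2.setTransform(saved);
        // Compositing and transparency: a half-transparent blue square on top.
        g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g2.setColor(Color.blue);
        g2.fillRect(60, 60, 80, 80);
        g2.dispose();
        return image;
    }

    public static void main(String[] args) {
        BufferedImage image = render();
        // The center pixel lies under the translucent blue square, so it is not empty.
        System.out.println("center alpha: " + (image.getRGB(100, 100) >>> 24));
    }
}
```

Rendering into a BufferedImage rather than a visible component is also a common technique for double buffering and for saving drawings to disk.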
3. A Graphics Example
Here is a complete program that allows you to interactively define points, lines, and polygons using mouse input. This program can be run either as an application or as an applet.
// This is a Java graphics example that can be run either as an applet or as an application.
// Created by Kevin Amaratunga 10/17/1997. Converted to Swing 10/17/1999.
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.util.*;
// In order to run as an applet, class Geometry must be declared as a public class. Note that there
// cannot be more than one public class in a .java file. Also, the public class must have the same
// name as the .java file.
public class Geometry extends JApplet {
JTextArea mTextArea;
DrawingArea mDrawingArea;
public Geometry() {
// Get the applet’s container.
Container c = getContentPane();
// Choose a layout manager. BorderLayout is a straightforward one to use.
c.setLayout(new BorderLayout());
// Create a drawing area and add it to the center of the applet.
mDrawingArea = new DrawingArea(this);
c.add("Center", mDrawingArea);
// Create a read only text area to be used for displaying
// information. Add it to the bottom of the applet.
mTextArea = new JTextArea();
JScrollPane scrollPane = new JScrollPane(mTextArea);
scrollPane.setPreferredSize(new Dimension(600, 100));
mTextArea.setEditable(false);
c.add("South", scrollPane);
}
public JTextArea getTextArea() {
return mTextArea;
}
public static void main(String args[]) {
// Create the applet object.
Geometry geomApplet = new Geometry();
// Create a frame. Then set its size and title.
JFrame frame = new JFrame();
frame.setSize(600, 600);
frame.setTitle(geomApplet.getClass().getName());
// Make the frame closable.
WindowListener listener = new WindowAdapter() {
// An anonymous class that extends WindowAdapter.
public void windowClosing(WindowEvent e) {
System.out.println("Window closing");
System.exit(0);
}
};
frame.addWindowListener(listener);
// Add the applet to the center of the frame.
frame.getContentPane().add("Center", geomApplet);
// Initialize the applet.
geomApplet.init();
// Make the frame visible.
frame.setVisible(true);
// Start the applet.
geomApplet.start();
}
}
// The drawing area is the area within which all the objects will be drawn.
class DrawingArea extends JPanel implements MouseListener {
// Parent and child widgets.
Geometry mGeomApplet; // The parent applet.
JPopupMenu mPopupMenu; // Popup menu for creating new objects.
// Object lists.
Vector mPointList; // List of all Point objects.
Vector mLineList; // List of all Line objects.
Vector mPolygonList; // List of all Polygon objects.
// Constants that indicate which kind of object (if any) is currently being created.
static final int NO_OBJECT = 0;
static final int POINT_OBJECT = 1;
static final int LINE_OBJECT = 2;
static final int POLYGON_OBJECT = 3;
// Miscellaneous state variables.
int miLastButton = 0; // Last button for which an event was received.
int miAcceptingInput = 0; // Type of object (if any) that we are currently creating.
int miPointsEntered = 0; // Number of points entered for this object so far.
Object mCurrentObject = null; // The object that we are currently creating.
// DrawingArea constructor.
DrawingArea(Geometry geomApplet) {
JMenuItem menuItem;
mGeomApplet = geomApplet;
// Set the background color.
setBackground(Color.white);
// Register the drawing area to start listening to mouse events.
addMouseListener(this);
// Create a popup menu and make it a child of the drawing area, but don’t show it just yet.
mPopupMenu = new JPopupMenu("New Object");
menuItem = new JMenuItem("Point");
menuItem.addActionListener(new PointActionListener(this));
mPopupMenu.add(menuItem);
menuItem = new JMenuItem("Line");
menuItem.addActionListener(new LineActionListener(this));
mPopupMenu.add(menuItem);
menuItem = new JMenuItem("Polygon");
menuItem.addActionListener(new PolygonActionListener(this));
mPopupMenu.add(menuItem);
add(mPopupMenu);
// Create the object lists with a reasonable initial capacity.
mPointList = new Vector(10);
mLineList = new Vector(10);
mPolygonList = new Vector(10);
}
// The paint method.
public void paintComponent(Graphics g) {
int i;
// Paint the background.
super.paintComponent(g);
// Draw all objects that are stored in the object lists.
for (i = 0; i < mPointList.size(); i++) {
Point point = (Point)mPointList.elementAt(i);
g.fillRect(point.x-1, point.y-1, 3, 3);
}
for (i = 0; i < mLineList.size(); i++) {
Line line = (Line)mLineList.elementAt(i);
line.draw(g);
}
for (i = 0; i < mPolygonList.size(); i++) {
Polygon polygon = (Polygon)mPolygonList.elementAt(i);
int j;
g.setColor(Color.red);
g.drawPolygon(polygon);
g.setColor(Color.black);
for (j = 0; j < polygon.npoints; j++) {
g.fillRect(polygon.xpoints[j], polygon.ypoints[j], 3, 3);
}
}
// Draw as much of the current object as available.
switch (miAcceptingInput) {
case LINE_OBJECT:
Line line = (Line)mCurrentObject;
if (line.mb1 && !line.mb2)
g.fillRect(line.mEnd1.x-1, line.mEnd1.y-1, 3, 3);
break;
case POLYGON_OBJECT:
Polygon polygon = (Polygon)mCurrentObject;
int j;
g.setColor(Color.red);
g.drawPolyline(polygon.xpoints, polygon.ypoints, polygon.npoints);
g.setColor(Color.black);
for (j = 0; j < polygon.npoints; j++) {
g.fillRect(polygon.xpoints[j], polygon.ypoints[j], 3, 3);
}
break;
default:
break;
}
// Draw some text at the top of the drawing area.
int w = getSize().width;
int h = getSize().height;
g.drawRect(0, 0, w - 1, h - 1);
g.setFont(new Font("Helvetica", Font.PLAIN, 15));
g.drawString("Drawing area", (w - g.getFontMetrics().stringWidth("Drawing area"))/2, 10);
}
// The next five methods are required, since we implement the
// MouseListener interface. We are only interested in mouse pressed
// events.
public void mousePressed(MouseEvent e) {
int iX = e.getX(); // The x and y coordinates of the
int iY = e.getY(); // mouse event.
int iModifier = e.getModifiers();
if ((iModifier & InputEvent.BUTTON1_MASK) != 0) {
miLastButton = 1;
// If we are currently accepting input for a new object,
// then add the current point to the object.
if (miAcceptingInput != NO_OBJECT)
addPointToObject(iX, iY);
}
else if ((iModifier & InputEvent.BUTTON2_MASK) != 0) {
miLastButton = 2;
}
else if ((iModifier & InputEvent.BUTTON3_MASK) != 0) {
miLastButton = 3;
if (miAcceptingInput == NO_OBJECT) {
// Display the popup menu provided we are not accepting
// any input for a new object.
mPopupMenu.show(this, iX, iY);
}
else if (miAcceptingInput == POLYGON_OBJECT) {
// If current object is a polygon, finish it.
mPolygonList.addElement(mCurrentObject);
String str = "Finished creating polygon object.\n";
mGeomApplet.getTextArea().append(str);
mGeomApplet.repaint();
miAcceptingInput = NO_OBJECT;
miPointsEntered = 0;
mCurrentObject = null;
}
}
}
public void mouseClicked(MouseEvent e) {}
public void mouseEntered(MouseEvent e) {}
public void mouseExited(MouseEvent e) {}
public void mouseReleased(MouseEvent e) {}
public void getPointInput() {
miAcceptingInput = POINT_OBJECT;
mCurrentObject = (Object)new Point();
mGeomApplet.getTextArea().append("New point object: enter point.\n");
}
public void getLineInput() {
miAcceptingInput = LINE_OBJECT;
mCurrentObject = (Object)new Line();
mGeomApplet.getTextArea().append("New line: enter end points.\n");
}
public void getPolygonInput() {
miAcceptingInput = POLYGON_OBJECT;
mCurrentObject = (Object)new Polygon();
mGeomApplet.getTextArea().append("New polygon: enter vertices ");
mGeomApplet.getTextArea().append("(click right button to finish).\n");
}
void addPointToObject(int iX, int iY) {
String str;
miPointsEntered++;
switch (miAcceptingInput) {
case POINT_OBJECT:
str = "Point at (" + iX + "," + iY + ")\n";
mGeomApplet.getTextArea().append(str);
Point point = (Point)mCurrentObject;
point.x = iX;
point.y = iY;
mPointList.addElement(mCurrentObject);
str = "Finished creating point object.\n";
mGeomApplet.getTextArea().append(str);
mGeomApplet.repaint();
miAcceptingInput = NO_OBJECT;
miPointsEntered = 0;
mCurrentObject = null;
break;
case LINE_OBJECT:
if (miPointsEntered <= 2) {
str = "End " + miPointsEntered + " at (" + iX + "," + iY + ")";
str += "\n";
mGeomApplet.getTextArea().append(str);
}
Line line = (Line)mCurrentObject;
if (miPointsEntered == 1) {
line.setEnd1(iX, iY);
mGeomApplet.repaint();
}
else {
if (miPointsEntered == 2) {
line.setEnd2(iX, iY);
mLineList.addElement(mCurrentObject);
str = "Finished creating line object.\n";
mGeomApplet.getTextArea().append(str);
mGeomApplet.repaint();
}
miAcceptingInput = NO_OBJECT;
miPointsEntered = 0;
mCurrentObject = null;
}
break;
case POLYGON_OBJECT:
str = "Vertex " + miPointsEntered + " at (" + iX + "," + iY + ")";
str += "\n";
mGeomApplet.getTextArea().append(str);
Polygon polygon = (Polygon)mCurrentObject;
polygon.addPoint(iX, iY);
mGeomApplet.repaint();
break;
default:
break;
} // End switch.
}
}
// Action listener to create a new Point object.
class PointActionListener implements ActionListener {
DrawingArea mDrawingArea;
PointActionListener(DrawingArea drawingArea) {
mDrawingArea = drawingArea;
}
public void actionPerformed(ActionEvent e) {
mDrawingArea.getPointInput();
}
}
// Action listener to create a new Line object.
class LineActionListener implements ActionListener {
DrawingArea mDrawingArea;
LineActionListener(DrawingArea drawingArea) {
mDrawingArea = drawingArea;
}
public void actionPerformed(ActionEvent e) {
mDrawingArea.getLineInput();
}
}
// Action listener to create a new Polygon object.
class PolygonActionListener implements ActionListener {
DrawingArea mDrawingArea;
PolygonActionListener(DrawingArea drawingArea) {
mDrawingArea = drawingArea;
}
public void actionPerformed(ActionEvent e) {
mDrawingArea.getPolygonInput();
}
}
// A line class.
class Line {
Point mEnd1, mEnd2;
boolean mb1, mb2;
Line() {
mb1 = mb2 = false;
mEnd1 = new Point();
mEnd2 = new Point();
}
void setEnd1(int iX, int iY) {
mEnd1.x = iX;
mEnd1.y = iY;
mb1 = true;
}
void setEnd2(int iX, int iY) {
mEnd2.x = iX;
mEnd2.y = iY;
mb2 = true;
}
void draw(Graphics g) {
g.fillRect(mEnd1.x-1, mEnd1.y-1, 3, 3);
g.fillRect(mEnd2.x-1, mEnd2.y-1, 3, 3);
g.setColor(Color.green);
g.drawLine(mEnd1.x, mEnd1.y, mEnd2.x, mEnd2.y);
g.setColor(Color.black);
}
}
Contents
- Creating and Destroying Objects - Constructors and Destructors
- The new and delete Operators
- Scope and the Lifetime of Objects
- Data Structures for Managing Objects
1. Creating and Destroying Objects - Constructors and Destructors
(Ref. Lippman 14.1-14.3)
Let’s take a closer look at how constructors and destructors work.
A Point Class
Here is a complete example of a Point class. We have organized the code into three separate files:
point.h contains the declaration of the class, which describes the structure of a Point object.
point.C contains the definition of the class i.e. the actual implementation of the methods.
point_test.C is a program that uses the Point class.
Our Point class has three constructors and one destructor.
Point(); // The default constructor.
Point(float fX, float fY); // A constructor that takes two floats.
Point(const Point& p); // The copy constructor.
~Point(); // The destructor.
These constructors can be respectively invoked by object definitions such as
Point a;
Point b(1.0, 2.0);
Point c(b);
The default constructor, Point(), is so named because it can be invoked without any arguments. In our example, the default constructor initializes the Point to (0,0). The second constructor creates a Point from a pair of coordinates of type float. Note that we could combine these two constructors into a single constructor which has default arguments:
Point(float fX=0.0, float fY=0.0);
The third constructor is known as a copy constructor since it creates one Point from another. The object that we want to clone is passed in as a constant reference. Note that we cannot pass by value in this instance because doing so would lead to an unterminated recursive call to the copy constructor. In this example, the destructor does not have to perform any clean-up operations. Later on, we will see examples where the destructor has to release dynamically allocated memory.
Constructors and destructors can be triggered more often than you may imagine. For example, each time a Point is passed to a function by value, a local copy of the object is created. Likewise, each time a Point is returned by value, a temporary copy of the object is created in the calling program. In both cases, we will see an extra call to the copy constructor, and an extra call to the destructor. You are encouraged to put print statements in every constructor and in the destructor, and then carefully observe what happens.
point.h
// Declaration of class Point.
#ifndef _POINT_H_
#define _POINT_H_
#include <iostream.h>
class Point {
// The state of a Point object. Property variables are typically
// set up as private data members, which are read from and
// written to via public access methods.
private:
float mfX;
float mfY;
// The behavior of a Point object.
public:
Point(); // The default constructor.
Point(float fX, float fY); // A constructor that takes two floats.
Point(const Point& p); // The copy constructor.
~Point(); // The destructor.
void print() { // This function will be made inline by default.
cout << "(" << mfX << "," << mfY << ")" << endl;
}
void set_x(float fX);
float get_x();
void set_y(float fX);
float get_y();
};
#endif // _POINT_H_
point.C
// Definition of class Point.
#include "point.h"
// A constructor which creates a Point object at (0,0).
Point::Point() {
cout << "In constructor Point::Point()" << endl;
mfX = 0.0;
mfY = 0.0;
}
// A constructor which creates a Point object from two
// floats.
Point::Point(float fX, float fY) {
cout << "In constructor Point::Point(float fX, float fY)" << endl;
mfX = fX;
mfY = fY;
}
// A constructor which creates a Point object from
// another Point object.
Point::Point(const Point& p) {
cout << "In constructor Point::Point(const Point& p)" << endl;
mfX = p.mfX;
mfY = p.mfY;
}
// The destructor.
Point::~Point() {
cout << "In destructor Point::~Point()" << endl;
}
// Modifier for x coordinate.
void Point::set_x(float fX) {
mfX = fX;
}
// Accessor for x coordinate.
float Point::get_x() {
return mfX;
}
// Modifier for y coordinate.
void Point::set_y(float fY) {
mfY = fY;
}
// Accessor for y coordinate.
float Point::get_y() {
return mfY;
}
point_test.C
// Test program for the Point class.
#include "point.h"
int main() {
Point a;
Point b(1.0, 2.0);
Point c(b);
// Print out the current state of all objects.
a.print();
b.print();
c.print();
b.set_x(3.0);
b.set_y(4.0);
// Print out the current state of b.
cout << endl;
b.print();
return 0;
}
2. The new and delete Operators
(Ref. Lippman 4.9, 8.4)
Until now, we have only considered situations in which the exact number of objects to be created is known at compile time. This is rarely the case in real-world software: a web browser, for example, cannot predict in advance how many image objects it will find on a web page. What is needed, therefore, is a way to create and destroy objects dynamically at run time. C++ provides two operators for this purpose:
The new operator allows us to allocate memory for one or more objects. It is similar to the malloc() function in the C standard library.
The delete operator allows us to release memory that has previously been allocated using new. It is similar to the free() function in the C standard library. Note that it is an error to apply the delete operator to memory allocated by any means other than new.
We can allocate single objects using statements such as
a = new Point();
b = new Point(2.0, 3.0);
Object arrays can be allocated using statements such as
c = new Point[num_points];
In either case, new returns the starting address of the memory it has allocated, so a, b, and c must be defined as pointer types, Point *. A single object can be released using a statement such as
delete a;
When releasing memory associated with an array, it is important to remember to use the following notation:
delete[] c;
If the square brackets are omitted, the behavior is formally undefined; in practice, typically only the first object in the array has its destructor run, and the memory associated with the rest of the objects is leaked.
nd_test.C
// Test program for the new and delete operators.
#include "point.h"
int main() {
int num_points;
Point *a, *b, *c;
float d;
// Allocate a single Point object in heap memory. This invokes the default constructor.
a = new Point();
// This invokes a constructor that has two arguments.
b = new Point(2.0, 3.0);
// Print out the two point objects.
cout << "Here are the two Point objects I have created:" << endl;
a->print();
b->print();
// Destroy the two Point objects.
delete a;
delete b;
// Now allocate an array of Point objects in heap memory.
cout << "I will now create an array of Points. How big shall I make it? ";
cin >> num_points;
c = new Point[num_points];
for (int i = 0; i < num_points; i++) {
d = (float)i;
c[i].set_x(d);
c[i].set_y(d + 1.0);
}
// Print out the array of point objects.
cout << "Here is the array I have created:" << endl;
for (int i = 0; i < num_points; i++) {
c[i].print();
}
// Destroy the array of Point objects.
delete[] c; // What happens if [] is omitted?
return 0;
}
3. Scope and the Lifetime of Objects
(Ref. Lippman 8.1-8.4)
There are three fundamental ways of using memory in C and C++.
- Static memory. This is memory allocated by the linker for the duration of the program. Global variables and objects explicitly defined as static fall into this category.
- Automatic memory. Objects that are allocated in automatic memory are destroyed automatically when they go out of scope. Examples are local variables and function arguments. Objects that reside in automatic memory are said to be allocated on the stack.
- Dynamic memory. Memory allocated using the new operator (or malloc()) falls into this category. Dynamic memory must be explicitly released using the delete operator (or free(), as appropriate.) Objects that reside in dynamic memory are said to be allocated on the heap.
A garbage collector is a memory manager that automatically identifies unreferenced objects in dynamic memory and then reclaims that memory. The C and C++ standards do not require automatic garbage collection; however, garbage collectors are sometimes used in large-scale projects where it can be difficult to keep track of memory explicitly.
The following program illustrates various uses of memory. Note that the static object in the function foo() is only allocated once, even though foo() is invoked multiple times.
sl_test.C
// Test program for scope and the lifetime of objects.
#include "point.h"
Point a(1.0, 2.0); // Resides in static memory.
void foo() {
static Point a; // Resides in static memory.
a.set_x(a.get_x() + 1.0);
a.print();
}
int main() {
Point a(4.0, 3.0); // Resides in automatic memory.
a.print();
::a.print();
for (int i = 0; i < 3; i++)
foo();
Point *b = new Point(5.0, 6.0); // Resides in heap memory.
b->print();
delete b;
// Curly braces serve as scope delimiters.
{
Point a(7.0, 9.0); // Resides in automatic memory.
a.print();
::a.print();
}
return 0;
}
Here is the output from the program:
In constructor Point::Point(float fX, float fY) <-- Global object a.
In constructor Point::Point(float fX, float fY) <-- Local object a.
(4,3)
(1,2)
In constructor Point::Point() <-- Object a in foo().
(1,0)
(2,0)
(3,0)
In constructor Point::Point(float fX, float fY) <-- Object *b.
(5,6)
In destructor Point::~Point() <-- Object *b.
In constructor Point::Point(float fX, float fY) <-- Second local object a.
(7,9)
(1,2)
In destructor Point::~Point() <-- Second local object a.
In destructor Point::~Point() <-- Local object a.
In destructor Point::~Point() <-- Object a in foo().
In destructor Point::~Point() <-- Global object a.
4. Data Structures for Managing Objects
We have already seen an example of how to dynamically create an array of objects. This may not be the best approach for managing a collection of objects that is constantly changing, since we may wish to delete a single object while retaining the rest. Instead, we might consider using an array of pointers to hold individually allocated objects, as illustrated in the following example. Even this approach has its limitations since we need to know how big to make the pointer array. In general, a linked list is the data structure of choice, since it makes no assumptions about the maximum number of objects to be stored. We will see an example of a linked list later.
pa_test.C
// Pointer array test program.
#include "point.h"
int main() {
int i, max_points;
Point **a;
max_points = 5;
// Create an array of pointers to Point objects. We will use the
// array elements to hold on to dynamically allocated Point objects.
a = new Point *[max_points];
// Now create some point objects and store them in the array.
for (i = 0; i < max_points; i++)
a[i] = new Point(i, i);
// Let’s suppose we want to eliminate the middle Point.
i = (max_points-1) / 2;
delete a[i];
a[i] = NULL;
// Print out the remaining Points.
for (i = 0; i < max_points; i++) {
if (a[i])
a[i]->print();
}
// Delete the remaining Points. Note that it is acceptable to pass a NULL
// pointer to the delete operator.
for (i = 0; i < max_points; i++)
delete a[i];
// Now delete the array of pointers.
delete[] a;
return 0;
}
Topics
1. Introduction
Java® uses a stream-based approach to input and output. A stream in this context is a flow of data, which could either be read in from a data source (e.g. file, keyboard or socket) or written to a data sink (e.g file, screen, or socket). Java® currently supports two types of streams:
- 8-bit streams. These are intended for binary data i.e. data that will be manipulated at the byte level. The abstract base classes for 8-bit streams are InputStream and OutputStream.
- 16-bit streams. These are intended for character data. 16-bit streams are required because Java®’s internal representation for characters is the 16-bit Unicode format rather than the 8-bit ASCII format. The abstract base classes for 16-bit streams are Reader and Writer.
It is possible to create a 16-bit Reader from an 8-bit InputStream using the InputStreamReader class e.g.
Reader r = new InputStreamReader(System.in); // System.in is an example of an InputStream.
Likewise, it is possible to create a 16-bit Writer from an 8-bit OutputStream using the OutputStreamWriter class e.g.
Writer w = new OutputStreamWriter(System.out); // System.out is an example of an OutputStream.
2. Text Input
The FileReader class is used to read characters from a file. This class can only read one 16-bit Unicode character at a time (characters that are stored in 8-bit ASCII will be automatically promoted to Unicode.) In order to read a full line of text at once, we must layer a BufferedReader on top of the FileReader. Next, the individual words in the line of text can be extracted using a StringTokenizer. If the text contains numbers, we must also perform String to Number conversion operations, like Integer.parseInt() and Double.parseDouble().
import java.io.*;
import java.util.*;
public class Main {
public static void main(String[] args) {
try {
readText(args[0]);
}
catch (IOException e) {
e.printStackTrace();
}
}
// This function will read data from an ASCII text file.
public static void readText(String fileName) throws IOException {
// First create a FileReader. A Reader is a 16-bit input stream,
// which is intended for all forms of character (text) input.
Reader reader = new FileReader(fileName);
// Now create a BufferedReader from the Reader. This allows us to
// read in an entire line at a time.
BufferedReader bufferedReader = new BufferedReader(reader);
String nextLine;
while ((nextLine = bufferedReader.readLine()) != null) {
// Next, we create a StringTokenizer from the line we have just
// read in. This permits the extraction of nonspace characters.
StringTokenizer tokenizer = new StringTokenizer(nextLine);
// We can now extract various data types as follows.
String companyName = tokenizer.nextToken();
int numberShares = Integer.parseInt(tokenizer.nextToken());
double sharePrice = Double.parseDouble(tokenizer.nextToken());
// Print the data out on the screen.
System.out.print(companyName + " has " + numberShares);
System.out.println(" million shares valued at $" + sharePrice);
}
// Close the file, outside the loop, once all lines have been read.
bufferedReader.close();
}
}
This program can be easily converted to read in data from the keyboard. Simply replace
Reader reader = new FileReader(fileName);
with
Reader reader = new InputStreamReader(System.in);
3. Text Output
The FileWriter class is used to write text to a file. This class is only capable of writing out individual characters and strings. We can layer a PrintWriter on top of the FileWriter, so that we can write out numbers as well.
import java.io.*;
import java.util.*;
import java.text.*;
public class Main {
public static void main(String[] args) {
try {
writeText(args[0]);
}
catch (IOException e) {
e.printStackTrace();
}
}
// This function will write data to an ASCII text file.
public static void writeText(String fileName) throws IOException {
// First create a FileWriter. A Writer is a 16-bit output stream,
// which is intended for all forms of character (text) output.
Writer writer = new FileWriter(fileName);
// Next create a PrintWriter from the Writer. This allows us to
// print out other data types besides characters and Strings.
PrintWriter printWriter = new PrintWriter(writer);
// Now print out various data types.
boolean b = true;
int i = 20;
double d = 1.124;
String str = "This is some text.";
printWriter.print(b);
printWriter.print(i);
printWriter.print(d);
printWriter.println("\n" + str);
// This is an example of formatted output. In the format string,
// 0 and # represent digits. # means that the digit should not
// be displayed if it is 0.
DecimalFormat df = new DecimalFormat("#.000");
printWriter.println(df.format(200.0)); // 200.000
printWriter.println(df.format(0.123)); // .123
// This will flush the PrintWriter’s internal buffer, causing the
// data to be actually written to file.
printWriter.flush();
// Finally, close the file.
printWriter.close();
}
}
4. Binary Input and Output
Binary input and output is done using the 8-bit streams. To read binary data from a file, we create a FileInputStream and then layer a DataInputStream on top of it. To write binary data to a file, we create a FileOutputStream and then layer a DataOutputStream on top of it. The following example illustrates this.
import java.io.*;
public class Main {
public static void main(String[] args) {
try {
writeBinary(args[0]);
readBinary(args[0]);
}
catch (IOException e) {
e.printStackTrace();
}
}
// This function will write binary data to a file.
public static void writeBinary(String fileName) throws IOException {
// First create a FileOutputStream.
OutputStream outputStream = new FileOutputStream(fileName);
// Now layer a DataOutputStream on top of it.
DataOutputStream dataOutputStream = new DataOutputStream(outputStream);
// Now write out some data in binary format. Strings are written out
// in UTF format, which is a bridge between ASCII and Unicode.
int i = 5;
double d = 1.124;
char c = 'z';
String str = "Some text";
dataOutputStream.writeInt(i); // Increases file size by 4 bytes.
dataOutputStream.writeDouble(d); // Increases file size by 8 bytes.
dataOutputStream.writeChar(c); // Increases file size by 2 bytes.
dataOutputStream.writeUTF(str); // Increases file size by 2+9 bytes.
// Close the file.
dataOutputStream.close();
}
// This function will read binary data from a file.
public static void readBinary(String fileName) throws IOException {
// First create a FileInputStream.
InputStream inputStream = new FileInputStream(fileName);
// Now layer a DataInputStream on top of it.
DataInputStream dataInputStream = new DataInputStream(inputStream);
// Now read in data from the binary file.
int i;
double d;
char c;
String str;
i = dataInputStream.readInt();
d = dataInputStream.readDouble();
c = dataInputStream.readChar();
str = dataInputStream.readUTF();
System.out.print("integer " + i + " double " + d);
System.out.println(" char " + c + " String " + str);
// Close the file.
dataInputStream.close();
}
}
Table of Contents
- Overview of make
- An Introduction to Makefiles
- Writing Makefiles
- Writing Rules
- Rule Syntax
- Using Wildcard Characters in File Names
- Searching Directories for Prerequisites
- Phony Targets
- Rules without Commands or Prerequisites
- Empty Target Files to Record Events
- Special Built-in Target Names
- Multiple Targets in a Rule
- Multiple Rules for One Target
- Static Pattern Rules
- Double-Colon Rules
- Generating Prerequisites Automatically
- Writing the Commands in Rules
- How to Use Variables
- Basics of Variable References
- The Two Flavors of Variables
- Advanced Features for Reference to Variables
- How Variables Get Their Values
- Setting Variables
- Appending More Text to Variables
- The override Directive
- Defining Variables Verbatim
- Variables from the Environment
- Target-specific Variable Values
- Pattern-specific Variable Values
- Conditional Parts of Makefiles
- Functions for Transforming Text
- How to Run make
- Using Implicit Rules
- Using make to Update Archive Files
- Features of GNU make
- Incompatibilities and Missing Features
- Makefile Conventions
- Quick Reference
- Errors Generated by Make
- Complex Makefile Example
- Index of Concepts
- Index of Functions, Variables, & Directives
Overview of make
The make utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them. This manual describes GNU make, which was implemented by Richard Stallman and Roland McGrath. Development since Version 3.76 has been handled by Paul D. Smith.
GNU make conforms to section 6.2 of IEEE Standard 1003.2-1992 (POSIX.2).
Our examples show C programs, since they are most common, but you can use make with any programming language whose compiler can be run with a shell command. Indeed, make is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change.
To prepare to use make, you must write a file called the makefile that describes the relationships among files in your program and provides commands for updating each file. In a program, typically, the executable file is updated from object files, which are in turn made by compiling source files.
Once a suitable makefile exists, each time you change some source files, this simple shell command:
make
suffices to perform all necessary recompilations. The make program uses the makefile data base and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the commands recorded in the data base.
You can provide command line arguments to make to control which files should be recompiled, or how. See section How to Run make.
How to Read This Manual
If you are new to make, or are looking for a general introduction, read the first few sections of each chapter, skipping the later sections. In each chapter, the first few sections contain introductory or general information and the later sections contain specialized or technical information. The exception is section An Introduction to Makefiles, all of which is introductory.
If you are familiar with other make programs, see section Features of GNU make, which lists the enhancements GNU make has, and section Incompatibilities and Missing Features, which explains the few things GNU make lacks that others have.
For a quick summary, see section Summary of Options, section Quick Reference, and section Special Built-in Target Names.
Problems and Bugs
If you have problems with GNU make or think you’ve found a bug, please report it to the developers; we cannot promise to do anything but we might well want to fix it.
Before reporting a bug, make sure you’ve actually found a real bug. Carefully reread the documentation and see if it really says you can do what you’re trying to do. If it’s not clear whether you should be able to do something or not, report that too; it’s a bug in the documentation!
Before reporting a bug or trying to fix it yourself, try to isolate it to the smallest possible makefile that reproduces the problem. Then send us the makefile and the exact results make gave you. Also say what you expected to occur; this will help us decide whether the problem was really in the documentation.
Once you’ve got a precise problem, please send electronic mail to:
Please include the version number of make you are using. You can get this information with the command make --version
. Be sure also to include the type of machine and operating system you are using. If possible, include the contents of the file config.h
that is generated by the configuration process.
An Introduction to Makefiles
You need a file called a makefile to tell make what to do. Most often, the makefile tells make how to compile and link a program.
In this chapter, we will discuss a simple makefile that describes how to compile and link a text editor which consists of eight C source files and three header files. The makefile can also tell make how to run miscellaneous commands when explicitly asked (for example, to remove certain files as a clean-up operation). To see a more complex example of a makefile, see section Complex Makefile Example.
When make recompiles the editor, each changed C source file must be recompiled. If a header file has changed, each C source file that includes the header file must be recompiled to be safe. Each compilation produces an object file corresponding to the source file. Finally, if any source file has been recompiled, all the object files, whether newly made or saved from previous compilations, must be linked together to produce the new executable editor.
What a Rule Looks Like
A simple makefile consists of “rules” with the following shape:
target … : prerequisites …
        command
        …
A target is usually the name of a file that is generated by a program; examples of targets are executable or object files. A target can also be the name of an action to carry out, such as clean
(see section Phony Targets).
A prerequisite is a file that is used as input to create the target. A target often depends on several files.
A command is an action that make carries out. A rule may have more than one command, each on its own line. Please note: you need to put a tab character at the beginning of every command line! This is an obscurity that catches the unwary.
Usually a command is in a rule with prerequisites and serves to create a target file if any of the prerequisites change. However, the rule that specifies commands for the target need not have prerequisites. For example, the rule containing the delete command associated with the target clean
does not have prerequisites.
A rule, then, explains how and when to remake certain files which are the targets of the particular rule. make carries out the commands on the prerequisites to create or update the target. A rule can also explain how and when to carry out an action. See section Writing Rules.
A makefile may contain other text besides rules, but a simple makefile need only contain rules. Rules may look somewhat more complicated than shown in this template, but all fit the pattern more or less.
A Simple Makefile
Here is a straightforward makefile that describes the way an executable file called edit depends on eight object files which, in turn, depend on eight C source and three header files.
In this example, all the C files include defs.h
, but only those defining editing commands include command.h
, and only low level files that change the editor buffer include buffer.h
.
edit : main.o kbd.o command.o display.o \
       insert.o search.o files.o utils.o
        cc -o edit main.o kbd.o command.o display.o \
                   insert.o search.o files.o utils.o
main.o : main.c defs.h
        cc -c main.c
kbd.o : kbd.c defs.h command.h
        cc -c kbd.c
command.o : command.c defs.h command.h
        cc -c command.c
display.o : display.c defs.h buffer.h
        cc -c display.c
insert.o : insert.c defs.h buffer.h
        cc -c insert.c
search.o : search.c defs.h buffer.h
        cc -c search.c
files.o : files.c defs.h buffer.h command.h
        cc -c files.c
utils.o : utils.c defs.h
        cc -c utils.c
clean :
        rm edit main.o kbd.o command.o display.o \
           insert.o search.o files.o utils.o
We split each long line into two lines using backslash-newline; this is like using one long line, but is easier to read.
To use this makefile to create the executable file called edit
, type:
make
To use this makefile to delete the executable file and all the object files from the directory, type:
make clean
In the example makefile, the targets include the executable file edit
, and the object files main.o
and kbd.o
. The prerequisites are files such as main.c
and defs.h
. In fact, each .o
file is both a target and a prerequisite. Commands include cc -c main.c
and cc -c kbd.c
.
When a target is a file, it needs to be recompiled or relinked if any of its prerequisites change. In addition, any prerequisites that are themselves automatically generated should be updated first. In this example, edit
depends on each of the eight object files; the object file main.o
depends on the source file main.c
and on the header file defs.h
.
A shell command follows each line that contains a target and prerequisites. These shell commands say how to update the target file. A tab character must come at the beginning of every command line to distinguish command lines from other lines in the makefile. (Bear in mind that make does not know anything about how the commands work. It is up to you to supply commands that will update the target file properly. All make does is execute the commands in the rule you have specified when the target file needs to be updated.)
The target clean
is not a file, but merely the name of an action. Since you normally do not want to carry out the actions in this rule, clean
is not a prerequisite of any other rule. Consequently, make never does anything with it unless you tell it specifically. Note that this rule not only is not a prerequisite, it also does not have any prerequisites, so the only purpose of the rule is to run the specified commands. Targets that do not refer to files but are just actions are called phony targets. See section Phony Targets, for information about this kind of target. See section Errors in Commands, to see how to cause make to ignore errors from rm or any other command.
How make Processes a Makefile
By default, make starts with the first target (not targets whose names start with .
). This is called the default goal. (Goals are the targets that make strives ultimately to update. See section Arguments to Specify the Goals.)
In the simple example of the previous section, the default goal is to update the executable program edit
; therefore, we put that rule first.
Thus, when you give the command:
make
make reads the makefile in the current directory and begins by processing the first rule. In the example, this rule is for relinking edit
; but before make can fully process this rule, it must process the rules for the files that edit
depends on, which in this case are the object files. Each of these files is processed according to its own rule. These rules say to update each .o
file by compiling its source file. The recompilation must be done if the source file, or any of the header files named as prerequisites, is more recent than the object file, or if the object file does not exist.
The other rules are processed because their targets appear as prerequisites of the goal. If some other rule is not depended on by the goal (or anything it depends on, etc.), that rule is not processed, unless you tell make to do so (with a command such as make clean).
Before recompiling an object file, make considers updating its prerequisites, the source file and header files. This makefile does not specify anything to be done for them–the .c
and .h
files are not the targets of any rules–so make does nothing for these files. But make would update automatically generated C programs, such as those made by Bison or Yacc, by their own rules at this time.
After recompiling whichever object files need it, make decides whether to relink edit
. This must be done if the file edit
does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than edit
, so edit
is relinked.
Thus, if we change the file insert.c
and run make, make will compile that file to update insert.o
, and then link edit
. If we change the file command.h
and run make, make will recompile the object files kbd.o
, command.o
and files.o
and then link the file edit
.
In our example, we had to list all the object files twice in the rule for edit
(repeated here):
edit : main.o kbd.o command.o display.o \
       insert.o search.o files.o utils.o
        cc -o edit main.o kbd.o command.o display.o \
                   insert.o search.o files.o utils.o
Such duplication is error-prone; if a new object file is added to the system, we might add it to one list and forget the other. We can eliminate the risk and simplify the makefile by using a variable. Variables allow a text string to be defined once and substituted in multiple places later (see section How to Use Variables).
It is standard practice for every makefile to have a variable named objects, OBJECTS, objs, OBJS, obj, or OBJ which is a list of all object file names. We would define such a variable objects with a line like this in the makefile:
objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o
Then, each place we want to put a list of the object file names, we can substitute the variable’s value by writing $(objects)
(see section How to Use Variables).
Here is how the complete simple makefile looks when you use a variable for the object files:
objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o
edit : $(objects)
        cc -o edit $(objects)
main.o : main.c defs.h
        cc -c main.c
kbd.o : kbd.c defs.h command.h
        cc -c kbd.c
command.o : command.c defs.h command.h
        cc -c command.c
display.o : display.c defs.h buffer.h
        cc -c display.c
insert.o : insert.c defs.h buffer.h
        cc -c insert.c
search.o : search.c defs.h buffer.h
        cc -c search.c
files.o : files.c defs.h buffer.h command.h
        cc -c files.c
utils.o : utils.c defs.h
        cc -c utils.c
clean :
        rm edit $(objects)
Letting make Deduce the Commands
It is not necessary to spell out the commands for compiling the individual C source files, because make can figure them out: it has an implicit rule for updating a .o
file from a correspondingly named .c
file using a cc -c
command. For example, it will use the command cc -c main.c -o main.o
to compile main.c
into main.o
. We can therefore omit the commands from the rules for the object files. See section Using Implicit Rules.
When a .c
file is used automatically in this way, it is also automatically added to the list of prerequisites. We can therefore omit the .c
files from the prerequisites, provided we omit the commands.
Here is the entire example, with both of these changes, and a variable objects as suggested above:
objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o

edit : $(objects)
        cc -o edit $(objects)

main.o : defs.h
kbd.o : defs.h command.h
command.o : defs.h command.h
display.o : defs.h buffer.h
insert.o : defs.h buffer.h
search.o : defs.h buffer.h
files.o : defs.h buffer.h command.h
utils.o : defs.h

.PHONY : clean
clean :
        -rm edit $(objects)
This is how we would write the makefile in actual practice. (The complications associated with clean
are described elsewhere. See section Phony Targets, and section Errors in Commands.)
Because implicit rules are so convenient, they are important. You will see them used frequently.
Another Style of Makefile
When the objects of a makefile are created only by implicit rules, an alternative style of makefile is possible. In this style of makefile, you group entries by their prerequisites instead of by their targets. Here is what one looks like:
objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o

edit : $(objects)
        cc -o edit $(objects)

$(objects) : defs.h
kbd.o command.o files.o : command.h
display.o insert.o search.o files.o : buffer.h
Here defs.h
is given as a prerequisite of all the object files; command.h
and buffer.h
are prerequisites of the specific object files listed for them.
Whether this is better is a matter of taste: it is more compact, but some people dislike it because they find it clearer to put all the information about each target in one place.
Rules for Cleaning the Directory
Compiling a program is not the only thing you might want to write rules for. Makefiles commonly tell how to do a few other things besides compiling a program: for example, how to delete all the object files and executables so that the directory is clean
.
Here is how we could write a make rule for cleaning our example editor:
clean :
        rm edit $(objects)
In practice, we might want to write the rule in a somewhat more complicated manner to handle unanticipated situations. We would do this:
.PHONY : clean
clean :
        -rm edit $(objects)
This prevents make from getting confused by an actual file called clean
and causes it to continue in spite of errors from rm. (See section Phony Targets, and section Errors in Commands.)
A rule such as this should not be placed at the beginning of the makefile, because we do not want it to run by default! Thus, in the example makefile, we want the rule for edit, which recompiles the editor, to remain the default goal.
Since clean is not a prerequisite of edit, this rule will not run at all if we give the command make
with no arguments. In order to make the rule run, we have to type make clean
. See section How to Run make.
Writing Makefiles
The information that tells make how to recompile a system comes from reading a data base called the makefile.
What Makefiles Contain
Makefiles contain five kinds of things: explicit rules, implicit rules, variable definitions, directives, and comments. Rules, variables, and directives are described at length in later chapters.
- An explicit rule says when and how to remake one or more files, called the rule’s targets. It lists the other files that the targets depend on, called the prerequisites of the target, and may also give commands to use to create or update the targets. See section Writing Rules.
- An implicit rule says when and how to remake a class of files based on their names. It describes how a target may depend on a file with a name similar to the target and gives commands to create or update such a target. See section Using Implicit Rules.
- A variable definition is a line that specifies a text string value for a variable that can be substituted into the text later. The simple makefile example shows a variable definition for objects as a list of all object files (see section Variables Make Makefiles Simpler).
- A directive is a command for make to do something special while reading the makefile. These include:
- Reading another makefile (see section Including Other Makefiles).
- Deciding (based on the values of variables) whether to use or ignore a part of the makefile (see section Conditional Parts of Makefiles).
- Defining a variable from a verbatim string containing multiple lines (see section Defining Variables Verbatim).
- # in a line of a makefile starts a comment. It and the rest of the line are ignored, except that a trailing backslash not escaped by another backslash will continue the comment across multiple lines. Comments may appear on any of the lines in the makefile, except within a define directive, and perhaps within commands (where the shell decides what is a comment). A line containing just a comment (with perhaps spaces before it) is effectively blank, and is ignored.
What Name to Give Your Makefile
By default, when make looks for the makefile, it tries the following names, in order: GNUmakefile
, makefile
and Makefile
.
Normally you should call your makefile either makefile
or Makefile
. (We recommend Makefile
because it appears prominently near the beginning of a directory listing, right near other important files such as README
.) The first name checked, GNUmakefile
, is not recommended for most makefiles. You should use this name if you have a makefile that is specific to GNU make, and will not be understood by other versions of make. Other make programs look for makefile
and Makefile
, but not GNUmakefile
.
If make finds none of these names, it does not use any makefile. Then you must specify a goal with a command argument, and make will attempt to figure out how to remake it using only its built-in implicit rules. See section Using Implicit Rules.
If you want to use a nonstandard name for your makefile, you can specify the makefile name with the -f
or --file
option. The arguments -f name
or --file=name
tell make to read the file name as the makefile. If you use more than one -f
or --file
option, you can specify several makefiles. All the makefiles are effectively concatenated in the order specified. The default makefile names GNUmakefile
, makefile
and Makefile
are not checked automatically if you specify -f
or --file
.
The include directive tells make to suspend reading the current makefile and read one or more other makefiles before continuing. The directive is a line in the makefile that looks like this:
include filenames…
filenames can contain shell file name patterns.
Extra spaces are allowed and ignored at the beginning of the line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command line.) Whitespace is required between include and the file names, and between file names; extra whitespace is ignored there and at the end of the directive. A comment starting with #
is allowed at the end of the line. If the file names contain any variable or function references, they are expanded. See section How to Use Variables.
For example, if you have three .mk
files, a.mk
, b.mk
, and c.mk
, and $(bar) expands to bish bash, then the following expression
include foo *.mk $(bar)
is equivalent to
include foo a.mk b.mk c.mk bish bash
When make processes an include directive, it suspends reading of the containing makefile and reads from each listed file in turn. When that is finished, make resumes reading the makefile in which the directive appears.
One occasion for using include directives is when several programs, handled by individual makefiles in various directories, need to use a common set of variable definitions (see section Setting Variables) or pattern rules (see section Defining and Redefining Pattern Rules).
Another such occasion is when you want to generate prerequisites from source files automatically; the prerequisites can be put in a file that is included by the main makefile. This practice is generally cleaner than that of somehow appending the prerequisites to the end of the main makefile as has been traditionally done with other versions of make. See section Generating Prerequisites Automatically.
If the specified name does not start with a slash, and the file is not found in the current directory, several other directories are searched. First, any directories you have specified with the -I or --include-dir option are searched (see section Summary of Options). Then the following directories (if they exist) are searched, in this order: prefix/include (normally /usr/local/include), /usr/gnu/include, /usr/local/include, /usr/include.
If an included makefile cannot be found in any of these directories, a warning message is generated, but it is not an immediately fatal error; processing of the makefile containing the include continues. Once it has finished reading makefiles, make will try to remake any that are out of date or don’t exist. See section How Makefiles Are Remade. Only after it has tried to find a way to remake a makefile and failed, will make diagnose the missing makefile as a fatal error.
If you want make to simply ignore a makefile which does not exist and cannot be remade, with no error message, use the -include directive instead of include, like this:
-include filenames…
This acts like include in every way except that there is no error (not even a warning) if any of the filenames do not exist. For compatibility with some other make implementations, sinclude is another name for -include.
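For instance, generated dependency files may not exist on a first build; -include reads them when present and stays silent otherwise (file names illustrative):

```make
sources := $(wildcard *.c)

# No error or warning if the .d files have not been generated yet
-include $(sources:.c=.d)
```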
The Variable MAKEFILES

If the environment variable MAKEFILES is defined, make considers its value as a list of names (separated by whitespace) of additional makefiles to be read before the others. This works much like the include directive: various directories are searched for those files (see section Including Other Makefiles). In addition, the default goal is never taken from one of these makefiles and it is not an error if the files listed in MAKEFILES are not found.
The main use of MAKEFILES is in communication between recursive invocations of make (see section Recursive Use of make). It usually is not desirable to set the environment variable before a top-level invocation of make, because it is usually better not to mess with a makefile from outside. However, if you are running make without a specific makefile, a makefile in MAKEFILES can do useful things to help the built-in implicit rules work better, such as defining search paths (see section Searching Directories for Prerequisites).
Some users are tempted to set MAKEFILES in the environment automatically on login, and program makefiles to expect this to be done. This is a very bad idea, because such makefiles will fail to work if run by anyone else. It is much better to write explicit include directives in the makefiles. See section Including Other Makefiles.
How Makefiles Are Remade
Sometimes makefiles can be remade from other files, such as RCS or SCCS files. If a makefile can be remade from other files, you probably want make to get an up-to-date version of the makefile to read in.
To this end, after reading in all makefiles, make will consider each as a goal target and attempt to update it. If a makefile has a rule which says how to update it (found either in that very makefile or in another one) or if an implicit rule applies to it (see section Using Implicit Rules), it will be updated if necessary. After all makefiles have been checked, if any have actually been changed, make starts with a clean slate and reads all the makefiles over again. (It will also attempt to update each of them over again, but normally this will not change them again, since they are already up to date.)
If you know that one or more of your makefiles cannot be remade and you want to keep make from performing an implicit rule search on them, perhaps for efficiency reasons, you can use any normal method of preventing implicit rule lookup to do so. For example, you can write an explicit rule with the makefile as the target, and an empty command string (see section Using Empty Commands).
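A minimal sketch of that technique: an explicit rule with an empty command string (the lone semicolon) stops make from searching for a way to rebuild the makefile:

```make
# Declare that this makefile is never remade; the empty command
# suppresses the implicit rule search for it
GNUmakefile: ;
```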
If the makefiles specify a double-colon rule to remake a file with commands but no prerequisites, that file will always be remade (see section Double-Colon Rules). In the case of makefiles, a makefile that has a double-colon rule with commands but no prerequisites will be remade every time make is run, and then again after make starts over and reads the makefiles in again. This would cause an infinite loop: make would constantly remake the makefile, and never do anything else. So, to avoid this, make will not attempt to remake makefiles which are specified as targets of a double-colon rule with commands but no prerequisites.
If you do not specify any makefiles to be read with -f or --file options, make will try the default makefile names; see section What Name to Give Your Makefile. Unlike makefiles explicitly requested with -f or --file options, make is not certain that these makefiles should exist. However, if a default makefile does not exist but can be created by running make rules, you probably want the rules to be run so that the makefile can be used.
Therefore, if none of the default makefiles exists, make will try to make each of them in the same order in which they are searched for (see section What Name to Give Your Makefile) until it succeeds in making one, or it runs out of names to try. Note that it is not an error if make cannot find or make any makefile; a makefile is not always necessary.
When you use the -t or --touch option (see section Instead of Executing the Commands), you would not want to use an out-of-date makefile to decide which targets to touch. So the -t option has no effect on updating makefiles; they are really updated even if -t is specified. Likewise, -q (or --question) and -n (or --just-print) do not prevent updating of makefiles, because an out-of-date makefile would result in the wrong output for other targets. Thus, make -f mfile -n foo will update mfile, read it in, and then print the commands to update foo and its prerequisites without running them. The commands printed for foo will be those specified in the updated contents of mfile.
However, on occasion you might actually wish to prevent updating of even the makefiles. You can do this by specifying the makefiles as goals in the command line as well as specifying them as makefiles. When the makefile name is specified explicitly as a goal, the options -t and so on do apply to them.

Thus, make -f mfile -n mfile foo would read the makefile mfile, print the commands needed to update it without actually running them, and then print the commands needed to update foo without running them. The commands for foo will be those specified by the existing contents of mfile.
Overriding Part of Another Makefile
Sometimes it is useful to have a makefile that is mostly just like another makefile. You can often use the include directive to include one in the other, and add more targets or variable definitions. However, if the two makefiles give different commands for the same target, make will not let you just do this. But there is another way.
In the containing makefile (the one that wants to include the other), you can use a match-anything pattern rule to say that to remake any target that cannot be made from the information in the containing makefile, make should look in another makefile. See section Defining and Redefining Pattern Rules, for more information on pattern rules.
For example, if you have a makefile called Makefile that says how to make the target foo (and other targets), you can write a makefile called GNUmakefile that contains:

        foo:
                frobnicate > foo

        %: force
                @$(MAKE) -f Makefile $@
        force: ;
If you say make foo, make will find GNUmakefile, read it, and see that to make foo, it needs to run the command frobnicate > foo. If you say make bar, make will find no way to make bar in GNUmakefile, so it will use the commands from the pattern rule: make -f Makefile bar. If Makefile provides a rule for updating bar, make will apply the rule. And likewise for any other target that GNUmakefile does not say how to make.
The way this works is that the pattern rule has a pattern of just %, so it matches any target whatever. The rule specifies a prerequisite force, to guarantee that the commands will be run even if the target file already exists. We give the force target empty commands to prevent make from searching for an implicit rule to build it; otherwise it would apply the same match-anything rule to force itself and create a prerequisite loop!
How make Reads a Makefile
GNU make does its work in two distinct phases. During the first phase it reads all the makefiles, included makefiles, etc. and internalizes all the variables and their values, implicit and explicit rules, and constructs a dependency graph of all the targets and their prerequisites. During the second phase, make uses these internal structures to determine what targets will need to be rebuilt and to invoke the rules necessary to do so.
It’s important to understand this two-phase approach because it has a direct impact on how variable and function expansion happens; this is often a source of some confusion when writing makefiles. Here we will present a summary of the phases in which expansion happens for different constructs within the makefile. We say that expansion is immediate if it happens during the first phase: in this case make will expand any variables or functions in that section of a construct as the makefile is parsed. We say that expansion is deferred if expansion is not performed immediately. Expansion of a deferred construct is not performed until either the construct appears later in an immediate context, or until the second phase.
You may not be familiar with some of these constructs yet. You can refer back to this section as you become familiar with them in later chapters.
Variable Assignment
Variable definitions are parsed as follows:
        immediate = deferred
        immediate ?= deferred
        immediate := immediate
        immediate += deferred or immediate

        define immediate
          deferred
        endef
For the append operator, +=, the right-hand side is considered immediate if the variable was previously set as a simple variable (:=), and deferred otherwise.
Conditional Syntax
All instances of conditional syntax are parsed immediately, in their entirety; this includes the ifdef, ifeq, ifndef, and ifneq forms.
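One consequence, sketched below with illustrative variable names: a conditional tests the values variables have at the moment make parses it, so a later assignment cannot change which branch was taken:

```make
FOO = a
ifeq ($(FOO),a)     # evaluated right now, while the makefile is parsed
MSG = took the a-branch
endif
FOO = b             # too late to affect the conditional above
```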
Rule Definition
A rule is always expanded the same way, regardless of the form:
        immediate : immediate ; deferred
                deferred
That is, the target and prerequisite sections are expanded immediately, and the commands used to construct the target are always deferred. This general rule is true for explicit rules, pattern rules, suffix rules, static pattern rules, and simple prerequisite definitions.
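A small sketch of this split (names illustrative): the target line is expanded while the makefile is read, but the command is not expanded until the rule runs, after the whole file has been parsed:

```make
NAME = one
# Target expands immediately, so this rule is named "show-one"
show-$(NAME):
	@echo $(NAME)   # deferred: echoes the final value, "two"
NAME = two
```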
Writing Rules
A rule appears in the makefile and says when and how to remake certain files, called the rule’s targets (most often only one per rule). It lists the other files that are the prerequisites of the target, and commands to use to create or update the target.
The order of rules is not significant, except for determining the default goal: the target for make to consider, if you do not otherwise specify one. The default goal is the target of the first rule in the first makefile. If the first rule has multiple targets, only the first target is taken as the default. There are two exceptions: a target starting with a period is not a default unless it contains one or more slashes, /, as well; and, a target that defines a pattern rule has no effect on the default goal. (See section Defining and Redefining Pattern Rules.)

Therefore, we usually write the makefile so that the first rule is the one for compiling the entire program or all the programs described by the makefile (often with a target called all). See section Arguments to Specify the Goals.
Rule Syntax
In general, a rule looks like this:
        targets : prerequisites
                command
                ...
or like this:
        targets : prerequisites ; command
                command
                ...
The targets are file names, separated by spaces. Wildcard characters may be used (see section Using Wildcard Characters in File Names) and a name of the form a(m) represents member m in archive file a (see section Archive Members as Targets). Usually there is only one target per rule, but occasionally there is a reason to have more (see section Multiple Targets in a Rule).
The command lines start with a tab character. The first command may appear on the line after the prerequisites, with a tab character, or may appear on the same line, with a semicolon. Either way, the effect is the same. See section Writing the Commands in Rules.
Because dollar signs are used to start variable references, if you really want a dollar sign in a rule you must write two of them, $$ (see section How to Use Variables). You may split a long line by inserting a backslash followed by a newline, but this is not required, as make places no limit on the length of a line in a makefile.
A rule tells make two things: when the targets are out of date, and how to update them when necessary.
The criterion for being out of date is specified in terms of the prerequisites, which consist of file names separated by spaces. (Wildcards and archive members (see section Using make to Update Archive Files) are allowed here too.) A target is out of date if it does not exist or if it is older than any of the prerequisites (by comparison of last-modification times). The idea is that the contents of the target file are computed based on information in the prerequisites, so if any of the prerequisites changes, the contents of the existing target file are no longer necessarily valid.
How to update is specified by commands. These are lines to be executed by the shell (normally sh), but with some extra features (see section Writing the Commands in Rules).
Using Wildcard Characters in File Names
A single file name can specify many files using wildcard characters. The wildcard characters in make are *, ? and [...], the same as in the Bourne shell. For example, *.c specifies a list of all the files (in the working directory) whose names end in .c.
The character ~ at the beginning of a file name also has special significance. If alone, or followed by a slash, it represents your home directory. For example ~/bin expands to /home/you/bin. If the ~ is followed by a word, the string represents the home directory of the user named by that word. For example ~john/bin expands to /home/john/bin. On systems which don’t have a home directory for each user (such as MS-DOS or MS-Windows), this functionality can be simulated by setting the environment variable HOME.
Wildcard expansion happens automatically in targets, in prerequisites, and in commands (where the shell does the expansion). In other contexts, wildcard expansion happens only if you request it explicitly with the wildcard function.
The special significance of a wildcard character can be turned off by preceding it with a backslash. Thus, foo\*bar would refer to a specific file whose name consists of foo, an asterisk, and bar.
Wildcard Examples
Wildcards can be used in the commands of a rule, where they are expanded by the shell. For example, here is a rule to delete all the object files:
        clean:
                rm -f *.o
Wildcards are also useful in the prerequisites of a rule. With the following rule in the makefile, make print will print all the .c files that have changed since the last time you printed them:
        print: *.c
                lpr -p $?
                touch print
This rule uses print as an empty target file; see section Empty Target Files to Record Events. (The automatic variable $? is used to print only those files that have changed; see section Automatic Variables.)
Wildcard expansion does not happen when you define a variable. Thus, if you write this:
objects = *.o
then the value of the variable objects is the actual string *.o. However, if you use the value of objects in a target, prerequisite or command, wildcard expansion will take place at that time. To set objects to the expansion, instead use:
objects := $(wildcard *.o)
See section The Function wildcard.
Pitfalls of Using Wildcards
Now here is an example of a naive way of using wildcard expansion, that does not do what you would intend. Suppose you would like to say that the executable file foo is made from all the object files in the directory, and you write this:
        objects = *.o

        foo : $(objects)
                cc -o foo $(CFLAGS) $(objects)
The value of objects is the actual string *.o. Wildcard expansion happens in the rule for foo, so that each existing .o file becomes a prerequisite of foo and will be recompiled if necessary.
But what if you delete all the .o files? When a wildcard matches no files, it is left as it is, so then foo will depend on the oddly-named file *.o. Since no such file is likely to exist, make will give you an error saying it cannot figure out how to make *.o. This is not what you want!
Actually it is possible to obtain the desired result with wildcard expansion, but you need more sophisticated techniques, including the wildcard function and string substitution. These are described in the following section.
Microsoft operating systems (MS-DOS and MS-Windows) use backslashes to separate directories in pathnames, like so:
c:\foo\bar\baz.c
This is equivalent to the Unix-style c:/foo/bar/baz.c (the c: part is the so-called drive letter). When make runs on these systems, it supports backslashes as well as the Unix-style forward slashes in pathnames. However, this support does not include the wildcard expansion, where backslash is a quote character. Therefore, you must use Unix-style slashes in these cases.
The Function wildcard
Wildcard expansion happens automatically in rules. But wildcard expansion does not normally take place when a variable is set, or inside the arguments of a function. If you want to do wildcard expansion in such places, you need to use the wildcard function, like this:
$(wildcard pattern…)
This string, used anywhere in a makefile, is replaced by a space-separated list of names of existing files that match one of the given file name patterns. If no existing file name matches a pattern, then that pattern is omitted from the output of the wildcard function. Note that this is different from how unmatched wildcards behave in rules, where they are used verbatim rather than ignored (see section Pitfalls of Using Wildcards).
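The difference can be checked directly. Assuming no file matches the (illustrative) pattern nosuch-*.c, the pattern simply vanishes from the function's result instead of appearing verbatim:

```make
hits := $(wildcard nosuch-*.c)
ifeq ($(strip $(hits)),)
$(info wildcard returned nothing; the unmatched pattern was dropped)
endif
```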
One use of the wildcard function is to get a list of all the C source files in a directory, like this:
$(wildcard *.c)
We can change the list of C source files into a list of object files by replacing the .c suffix with .o in the result, like this:

        $(patsubst %.c,%.o,$(wildcard *.c))
(Here we have used another function, patsubst. See section Functions for String Substitution and Analysis.)
Thus, a makefile to compile all C source files in the directory and then link them together could be written as follows:
        objects := $(patsubst %.c,%.o,$(wildcard *.c))

        foo : $(objects)
                cc -o foo $(objects)
(This takes advantage of the implicit rule for compiling C programs, so there is no need to write explicit rules for compiling the files. See section The Two Flavors of Variables, for an explanation of :=, which is a variant of =.)
Searching Directories for Prerequisites
For large systems, it is often desirable to put sources in a separate directory from the binaries. The directory search features of make facilitate this by searching several directories automatically to find a prerequisite. When you redistribute the files among directories, you do not need to change the individual rules, just the search paths.
VPATH: Search Path for All Prerequisites
The value of the make variable VPATH specifies a list of directories that make should search. Most often, the directories are expected to contain prerequisite files that are not in the current directory; however, VPATH specifies a search list that make applies for all files, including files which are targets of rules.
Thus, if a file that is listed as a target or prerequisite does not exist in the current directory, make searches the directories listed in VPATH for a file with that name. If a file is found in one of them, that file may become the prerequisite (see below). Rules may then specify the names of files in the prerequisite list as if they all existed in the current directory. See section Writing Shell Commands with Directory Search.
In the VPATH variable, directory names are separated by colons or blanks. The order in which directories are listed is the order followed by make in its search. (On MS-DOS and MS-Windows, semi-colons are used as separators of directory names in VPATH, since the colon can be used in the pathname itself, after the drive letter.)
For example,
VPATH = src:../headers
specifies a path containing two directories, src and ../headers, which make searches in that order.
With this value of VPATH, the following rule,
foo.o : foo.c
is interpreted as if it were written like this:
foo.o : src/foo.c
assuming the file foo.c does not exist in the current directory but is found in the directory src.
The vpath Directive
Similar to the VPATH variable, but more selective, is the vpath directive (note lower case), which allows you to specify a search path for a particular class of file names: those that match a particular pattern. Thus you can supply certain search directories for one class of file names and other directories (or none) for other file names.
There are three forms of the vpath directive:
vpath pattern directories
Specify the search path directories for file names that match pattern. The search path, directories, is a list of directories to be searched, separated by colons (semi-colons on MS-DOS and MS-Windows) or blanks, just like the search path used in the VPATH variable.
vpath pattern
Clear out the search path associated with pattern.
vpath
Clear all search paths previously specified with vpath directives.
A vpath pattern is a string containing a % character. The string must match the file name of a prerequisite that is being searched for, the % character matching any sequence of zero or more characters (as in pattern rules; see section Defining and Redefining Pattern Rules). For example, %.h matches files that end in .h. (If there is no %, the pattern must match the prerequisite exactly, which is not useful very often.)
% characters in a vpath directive’s pattern can be quoted with preceding backslashes (\). Backslashes that would otherwise quote % characters can be quoted with more backslashes. Backslashes that quote % characters or other backslashes are removed from the pattern before it is compared to file names. Backslashes that are not in danger of quoting % characters go unmolested.
When a prerequisite fails to exist in the current directory, if the pattern in a vpath directive matches the name of the prerequisite file, then the directories in that directive are searched just like (and before) the directories in the VPATH variable.
For example,
        vpath %.h ../headers

tells make to look for any prerequisite whose name ends in .h in the directory ../headers if the file is not found in the current directory.
If several vpath patterns match the prerequisite file’s name, then make processes each matching vpath directive one by one, searching all the directories mentioned in each directive. make handles multiple vpath directives in the order in which they appear in the makefile; multiple directives with the same pattern are independent of each other.
Thus,
        vpath %.c foo
        vpath %   blish
        vpath %.c bar

will look for a file ending in .c in foo, then blish, then bar, while

        vpath %.c foo:bar
        vpath %   blish

will look for a file ending in .c in foo, then bar, then blish.
How Directory Searches are Performed
When a prerequisite is found through directory search, regardless of type (general or selective), the pathname located may not be the one that make actually provides you in the prerequisite list. Sometimes the path discovered through directory search is thrown away.
The algorithm make uses to decide whether to keep or abandon a path found via directory search is as follows:
- If a target file does not exist at the path specified in the makefile, directory search is performed.
- If the directory search is successful, that path is kept and this file is tentatively stored as the target.
- All prerequisites of this target are examined using this same method.
- After processing the prerequisites, the target may or may not need to be rebuilt:
- If the target does not need to be rebuilt, the path to the file found during directory search is used for any prerequisite lists which contain this target. In short, if make doesn’t need to rebuild the target then you use the path found via directory search.
- If the target does need to be rebuilt (is out-of-date), the pathname found during directory search is thrown away, and the target is rebuilt using the file name specified in the makefile. In short, if make must rebuild, then the target is rebuilt locally, not in the directory found via directory search.
This algorithm may seem complex, but in practice it is quite often exactly what you want.
Other versions of make use a simpler algorithm: if the file does not exist, and it is found via directory search, then that pathname is always used whether or not the target needs to be built. Thus, if the target is rebuilt it is created at the pathname discovered during directory search.
If, in fact, this is the behavior you want for some or all of your directories, you can use the GPATH variable to indicate this to make.
GPATH has the same syntax and format as VPATH (that is, a space- or colon-delimited list of pathnames). If an out-of-date target is found by directory search in a directory that also appears in GPATH, then that pathname is not thrown away. The target is rebuilt using the expanded path.
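A minimal sketch (the directory name src is illustrative): listing a directory in both VPATH and GPATH makes out-of-date targets found there be rebuilt at the discovered path rather than in the current directory:

```make
# Search src/ for targets and prerequisites...
VPATH = src
# ...and keep the discovered path when rebuilding, so out-of-date
# targets found in src/ are rebuilt in src/ itself
GPATH = src
```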
Writing Shell Commands with Directory Search
When a prerequisite is found in another directory through directory search, this cannot change the commands of the rule; they will execute as written. Therefore, you must write the commands with care so that they will look for the prerequisite in the directory where make finds it.
This is done with the automatic variables such as $^ (see section Automatic Variables). For instance, the value of $^ is a list of all the prerequisites of the rule, including the names of the directories in which they were found, and the value of $@ is the target. Thus:
        foo.o : foo.c
                cc -c $(CFLAGS) $^ -o $@
(The variable CFLAGS exists so you can specify flags for C compilation by implicit rules; we use it here for consistency so it will affect all C compilations uniformly; see section Variables Used by Implicit Rules.)
Often the prerequisites include header files as well, which you do not want to mention in the commands. The automatic variable $< is just the first prerequisite:

        VPATH = src:../headers

        foo.o : foo.c defs.h hack.h
                cc -c $(CFLAGS) $< -o $@
Directory Search and Implicit Rules
The search through the directories specified in VPATH or with vpath also happens during consideration of implicit rules (see section Using Implicit Rules).
For example, when a file foo.o has no explicit rule, make considers implicit rules, such as the built-in rule to compile foo.c if that file exists. If such a file is lacking in the current directory, the appropriate directories are searched for it. If foo.c exists (or is mentioned in the makefile) in any of the directories, the implicit rule for C compilation is applied.
The commands of implicit rules normally use automatic variables as a matter of necessity; consequently they will use the file names found by directory search with no extra effort.
Directory Search for Link Libraries
Directory search applies in a special way to libraries used with the linker. This special feature comes into play when you write a prerequisite whose name is of the form -lname. (You can tell something strange is going on here because the prerequisite is normally the name of a file, and the file name of a library generally looks like libname.a, not like -lname.)
When a prerequisite’s name has the form -lname, make handles it specially by searching for the file libname.so in the current directory, in directories specified by matching vpath search paths and the VPATH search path, and then in the directories /lib, /usr/lib, and prefix/lib (normally /usr/local/lib, but MS-DOS/MS-Windows versions of make behave as if prefix is defined to be the root of the DJGPP installation tree).
If that file is not found, then the file libname.a is searched for, in the same directories as above.
For example, if there is a /usr/lib/libcurses.a library on your system (and no /usr/lib/libcurses.so file), then

        foo : foo.c -lcurses
                cc $^ -o $@

would cause the command cc foo.c /usr/lib/libcurses.a -o foo to be executed when foo is older than foo.c or than /usr/lib/libcurses.a.
Although the default set of files to be searched for is libname.so and libname.a, this is customizable via the .LIBPATTERNS variable. Each word in the value of this variable is a pattern string. When a prerequisite like -lname is seen, make will replace the percent in each pattern in the list with name and perform the above directory searches using that library filename. If no library is found, the next word in the list will be used.

The default value for .LIBPATTERNS is "lib%.so lib%.a", which provides the default behavior described above.
You can turn off link library expansion completely by setting this variable to an empty value.
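For example, restricting the search to static archives is a one-line change (the library in the rule is illustrative):

```make
# Only look for lib<name>.a; skip shared libraries entirely
.LIBPATTERNS = lib%.a

foo : foo.o -lcurses
	cc $^ -o $@
```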
Phony Targets
A phony target is one that is not really the name of a file. It is just a name for some commands to be executed when you make an explicit request. There are two reasons to use a phony target: to avoid a conflict with a file of the same name, and to improve performance.
If you write a rule whose commands will not create the target file, the commands will be executed every time the target comes up for remaking. Here is an example:
        clean:
                rm *.o temp
Because the rm command does not create a file named clean, probably no such file will ever exist. Therefore, the rm command will be executed every time you say make clean.
The phony target will cease to work if anything ever does create a file named clean in this directory. Since it has no prerequisites, the file clean would inevitably be considered up to date, and its commands would not be executed. To avoid this problem, you can explicitly declare the target to be phony, using the special target .PHONY (see section Special Built-in Target Names) as follows:
.PHONY : clean
Once this is done, make clean will run the commands regardless of whether there is a file named clean.
Since it knows that phony targets do not name actual files that could be remade from other files, make skips the implicit rule search for phony targets (see section Using Implicit Rules). This is why declaring a target phony is good for performance, even if you are not worried about the actual file existing.
Thus, you first write the line that states that clean is a phony target, then you write the rule, like this:
        .PHONY: clean
        clean:
                rm *.o temp
Another example of the usefulness of phony targets is in conjunction with recursive invocations of make. In this case the makefile will often contain a variable which lists a number of subdirectories to be built. One way to handle this is with one rule whose command is a shell loop over the subdirectories, like this:
        SUBDIRS = foo bar baz

        subdirs:
                for dir in $(SUBDIRS); do \
                  $(MAKE) -C $$dir; \
                done
There are a few problems with this method, however. First, any error detected in a submake is not noted by this rule, so it will continue to build the rest of the directories even when one fails. This can be overcome by adding shell commands to note the error and exit, but then it will do so even if make is invoked with the -k option, which is unfortunate. Second, and perhaps more importantly, you cannot take advantage of the parallel build capabilities of make using this method, since there is only one rule.
By declaring the subdirectories as phony targets (you must do this, as the subdirectory obviously always exists; otherwise it won’t be built) you can remove these problems:
        SUBDIRS = foo bar baz

        .PHONY: subdirs $(SUBDIRS)

        subdirs: $(SUBDIRS)

        $(SUBDIRS):
                $(MAKE) -C $@

        foo: baz
Here we’ve also declared that the foo subdirectory cannot be built until after the baz subdirectory is complete; this kind of relationship declaration is particularly important when attempting parallel builds.
A phony target should not be a prerequisite of a real target file; if it is, its commands are run every time make goes to update that file. As long as a phony target is never a prerequisite of a real target, the phony target commands will be executed only when the phony target is a specified goal (see section Arguments to Specify the Goals).
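A minimal sketch of this pitfall, using hypothetical target names (prepare and prog are not from the text above): because prepare is phony it is always considered out of date, so prog is relinked on every run even when nothing has changed.

```make
.PHONY: prepare
prepare:
        mkdir -p build

# prog depends on a phony target, so its link command runs every time.
prog: prog.o prepare
        cc -o prog prog.o
```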
Phony targets can have prerequisites. When one directory contains multiple programs, it is most convenient to describe all of the programs in one makefile ./Makefile. Since the target remade by default will be the first one in the makefile, it is common to make this a phony target named all and give it, as prerequisites, all the individual programs. For example:
all : prog1 prog2 prog3
.PHONY : all

prog1 : prog1.o utils.o
        cc -o prog1 prog1.o utils.o

prog2 : prog2.o
        cc -o prog2 prog2.o

prog3 : prog3.o sort.o utils.o
        cc -o prog3 prog3.o sort.o utils.o
Now you can say just make to remake all three programs, or specify as arguments the ones to remake (as in make prog1 prog3).
When one phony target is a prerequisite of another, it serves as a subroutine of the other. For example, here make cleanall will delete the object files, the difference files, and the file program:
.PHONY: cleanall cleanobj cleandiff

cleanall : cleanobj cleandiff
        rm program

cleanobj :
        rm *.o

cleandiff :
        rm *.diff
Rules without Commands or Prerequisites
If a rule has no prerequisites or commands, and the target of the rule is a nonexistent file, then make imagines this target to have been updated whenever its rule is run. This implies that all targets depending on this one will always have their commands run.
An example will illustrate this:
clean: FORCE
        rm $(objects)
FORCE:
Here the target FORCE satisfies the special conditions, so the target clean that depends on it is forced to run its commands. There is nothing special about the name FORCE, but that is one name commonly used this way.

As you can see, using FORCE this way has the same results as using .PHONY: clean.

Using .PHONY is more explicit and more efficient. However, other versions of make do not support .PHONY; thus FORCE appears in many makefiles. See section Phony Targets.
Empty Target Files to Record Events
The empty target is a variant of the phony target; it is used to hold commands for an action that you request explicitly from time to time. Unlike a phony target, this target file can really exist; but the file’s contents do not matter, and usually are empty.
The purpose of the empty target file is to record, with its last-modification time, when the rule’s commands were last executed. It does so because one of the commands is a touch command to update the target file.
The empty target file should have some prerequisites (otherwise it doesn’t make sense). When you ask to remake the empty target, the commands are executed if any prerequisite is more recent than the target; in other words, if a prerequisite has changed since the last time you remade the target. Here is an example:
print: foo.c bar.c
        lpr -p $?
        touch print
With this rule, make print will execute the lpr command if either source file has changed since the last make print. The automatic variable $? is used to print only those files that have changed (see section Automatic Variables).
Special Built-in Target Names
Certain names have special meanings if they appear as targets.
.PHONY
The prerequisites of the special target .PHONY are considered to be phony targets. When it is time to consider such a target, make will run its commands unconditionally, regardless of whether a file with that name exists or what its last-modification time is. See section Phony Targets.
.SUFFIXES
The prerequisites of the special target .SUFFIXES are the list of suffixes to be used in checking for suffix rules. See section Old-Fashioned Suffix Rules.
.DEFAULT
The commands specified for .DEFAULT are used for any target for which no rules are found (either explicit rules or implicit rules). See section Defining Last-Resort Default Rules. If .DEFAULT commands are specified, every file mentioned as a prerequisite, but not as a target in a rule, will have these commands executed on its behalf. See section Implicit Rule Search Algorithm.
.PRECIOUS
The targets which .PRECIOUS depends on are given the following special treatment: if make is killed or interrupted during the execution of their commands, the target is not deleted. See section Interrupting or Killing make. Also, if the target is an intermediate file, it will not be deleted after it is no longer needed, as is normally done. See section Chains of Implicit Rules. In this latter respect it overlaps with the .SECONDARY special target. You can also list the target pattern of an implicit rule (such as %.o) as a prerequisite file of the special target .PRECIOUS to preserve intermediate files created by rules whose target patterns match that file’s name.
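As a hedged sketch of that pattern form, assuming a hypothetical yacc-style chain parser.y -> parser.c -> parser.o:

```make
# Listing the pattern %.c as a prerequisite of .PRECIOUS keeps generated
# .c files such as parser.c around, instead of letting make delete them
# as intermediate files once parser.o has been built.
.PRECIOUS: %.c
```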
.INTERMEDIATE
The targets which .INTERMEDIATE depends on are treated as intermediate files. See section Chains of Implicit Rules. .INTERMEDIATE with no prerequisites has no effect.
.SECONDARY
The targets which .SECONDARY depends on are treated as intermediate files, except that they are never automatically deleted. See section Chains of Implicit Rules. .SECONDARY with no prerequisites causes all targets to be treated as secondary (i.e., no target is removed because it is considered intermediate).
.DELETE_ON_ERROR
If .DELETE_ON_ERROR is mentioned as a target anywhere in the makefile, then make will delete the target of a rule if it has changed and its commands exit with a nonzero exit status, just as it does when it receives a signal. See section Errors in Commands.
.IGNORE
If you specify prerequisites for .IGNORE, then make will ignore errors in execution of the commands run for those particular files. The commands for .IGNORE are not meaningful. If mentioned as a target with no prerequisites, .IGNORE says to ignore errors in execution of commands for all files. This usage of .IGNORE is supported only for historical compatibility. Since this affects every command in the makefile, it is not very useful; we recommend you use the more selective ways to ignore errors in specific commands. See section Errors in Commands.
.SILENT
If you specify prerequisites for .SILENT, then make will not print the commands to remake those particular files before executing them. The commands for .SILENT are not meaningful. If mentioned as a target with no prerequisites, .SILENT says not to print any commands before executing them. This usage of .SILENT is supported only for historical compatibility. We recommend you use the more selective ways to silence specific commands. See section Command Echoing. If you want to silence all commands for a particular run of make, use the -s or --silent option (see section Summary of Options).
.EXPORT_ALL_VARIABLES
Simply by being mentioned as a target, this tells make to export all variables to child processes by default. See section Communicating Variables to a Sub-make.
.NOTPARALLEL
If .NOTPARALLEL is mentioned as a target, then this invocation of make will be run serially, even if the -j option is given. Any recursively invoked make command will still be run in parallel (unless its makefile contains this target). Any prerequisites on this target are ignored.
Any defined implicit rule suffix also counts as a special target if it appears as a target, and so does the concatenation of two suffixes, such as .c.o. These targets are suffix rules, an obsolete way of defining implicit rules (but a way still widely used). In principle, any target name could be special in this way if you break it in two and add both pieces to the suffix list. In practice, suffixes normally begin with a period, so these special target names also begin with a period. See section Old-Fashioned Suffix Rules.
Multiple Targets in a Rule
A rule with multiple targets is equivalent to writing many rules, each with one target, and all identical aside from that. The same commands apply to all the targets, but their effects may vary because you can substitute the actual target name into the command using $@. The rule contributes the same prerequisites to all the targets also.
This is useful in two cases.
- You want just prerequisites, no commands. For example:
kbd.o command.o files.o: command.h
gives an additional prerequisite to each of the three object files mentioned.
- Similar commands work for all the targets. The commands do not need to be absolutely identical, since the automatic variable $@ can be used to substitute the particular target to be remade into the commands (see section Automatic Variables). For example:
bigoutput littleoutput : text.g
        generate text.g -$(subst output,,$@) > $@

is equivalent to

bigoutput : text.g
        generate text.g -big > bigoutput
littleoutput : text.g
        generate text.g -little > littleoutput
Here we assume the hypothetical program generate makes two types of output, one if given -big and one if given -little. See section Functions for String Substitution and Analysis, for an explanation of the subst function.
Suppose you would like to vary the prerequisites according to the target, much as the variable $@ allows you to vary the commands. You cannot do this with multiple targets in an ordinary rule, but you can do it with a static pattern rule. See section Static Pattern Rules.
Multiple Rules for One Target
One file can be the target of several rules. All the prerequisites mentioned in all the rules are merged into one list of prerequisites for the target. If the target is older than any prerequisite from any rule, the commands are executed.
There can only be one set of commands to be executed for a file. If more than one rule gives commands for the same file, make uses the last set given and prints an error message. (As a special case, if the file’s name begins with a dot, no error message is printed. This odd behavior is only for compatibility with other implementations of make.) There is no reason to write your makefiles this way; that is why make gives you an error message.
An extra rule with just prerequisites can be used to give a few extra prerequisites to many files at once. For example, one usually has a variable named objects containing a list of all the compiler output files in the system being made. An easy way to say that all of them must be recompiled if config.h changes is to write the following:
objects = foo.o bar.o

foo.o : defs.h
bar.o : defs.h test.h
$(objects) : config.h
This could be inserted or taken out without changing the rules that really specify how to make the object files, making it a convenient form to use if you wish to add the additional prerequisite intermittently.
Another wrinkle is that the additional prerequisites could be specified with a variable that you set with a command argument to make (see section Overriding Variables). For example,
extradeps=

$(objects) : $(extradeps)
means that the command make extradeps=foo.h will consider foo.h as a prerequisite of each object file, but plain make will not.
If none of the explicit rules for a target has commands, then make searches for an applicable implicit rule to find some commands (see section Using Implicit Rules).
Static Pattern Rules
Static pattern rules are rules which specify multiple targets and construct the prerequisite names for each target based on the target name. They are more general than ordinary rules with multiple targets because the targets do not have to have identical prerequisites. Their prerequisites must be analogous, but not necessarily identical.
Syntax of Static Pattern Rules
Here is the syntax of a static pattern rule:
targets …: target-pattern: dep-patterns …
        commands
        …
The targets list specifies the targets that the rule applies to. The targets can contain wildcard characters, just like the targets of ordinary rules (see section Using Wildcard Characters in File Names).
The target-pattern and dep-patterns say how to compute the prerequisites of each target. Each target is matched against the target-pattern to extract a part of the target name, called the stem. This stem is substituted into each of the dep-patterns to make the prerequisite names (one from each dep-pattern).
Each pattern normally contains the character % just once. When the target-pattern matches a target, the % can match any part of the target name; this part is called the stem. The rest of the pattern must match exactly. For example, the target foo.o matches the pattern %.o, with foo as the stem. The targets foo.c and foo.out do not match that pattern.
The prerequisite names for each target are made by substituting the stem for the % in each prerequisite pattern. For example, if one prerequisite pattern is %.c, then substitution of the stem foo gives the prerequisite name foo.c. It is legitimate to write a prerequisite pattern that does not contain %; then this prerequisite is the same for all targets.
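For instance, a sketch combining a per-target %.c prerequisite with a constant one (the file names here are only illustrative):

```make
objects = foo.o bar.o

# config.h contains no %, so it becomes a prerequisite of every object
# file in the list, alongside the per-target %.c source.
$(objects): %.o: %.c config.h
        $(CC) -c $(CFLAGS) $< -o $@
```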
% characters in pattern rules can be quoted with preceding backslashes (\). Backslashes that would otherwise quote % characters can be quoted with more backslashes. Backslashes that quote % characters or other backslashes are removed from the pattern before it is compared to file names or has a stem substituted into it. Backslashes that are not in danger of quoting % characters go unmolested. For example, the pattern the\%weird\\%pattern\\ has the%weird\ preceding the operative % character, and pattern\\ following it. The final two backslashes are left alone because they cannot affect any % character.
Here is an example, which compiles each of foo.o and bar.o from the corresponding .c file:
objects = foo.o bar.o

all: $(objects)

$(objects): %.o: %.c
        $(CC) -c $(CFLAGS) $< -o $@
Here $< is the automatic variable that holds the name of the prerequisite, and $@ is the automatic variable that holds the name of the target; see section Automatic Variables.
Each target specified must match the target pattern; a warning is issued for each target that does not. If you have a list of files, only some of which will match the pattern, you can use the filter function to remove nonmatching file names (see section Functions for String Substitution and Analysis):
files = foo.elc bar.o lose.o

$(filter %.o,$(files)): %.o: %.c
        $(CC) -c $(CFLAGS) $< -o $@
$(filter %.elc,$(files)): %.elc: %.el
        emacs -f batch-byte-compile $<
In this example the result of $(filter %.o,$(files)) is bar.o lose.o, and the first static pattern rule causes each of these object files to be updated by compiling the corresponding C source file. The result of $(filter %.elc,$(files)) is foo.elc, so that file is made from foo.el.
Another example shows how to use $* in static pattern rules:
bigoutput littleoutput : %output : text.g
        generate text.g -$* > $@
When the generate command is run, $* will expand to the stem, either big or little.
Static Pattern Rules versus Implicit Rules
A static pattern rule has much in common with an implicit rule defined as a pattern rule (see section Defining and Redefining Pattern Rules). Both have a pattern for the target and patterns for constructing the names of prerequisites. The difference is in how make decides when the rule applies.
An implicit rule can apply to any target that matches its pattern, but it does apply only when the target has no commands otherwise specified, and only when the prerequisites can be found. If more than one implicit rule appears applicable, only one applies; the choice depends on the order of rules.
By contrast, a static pattern rule applies to the precise list of targets that you specify in the rule. It cannot apply to any other target and it invariably does apply to each of the targets specified. If two conflicting rules apply, and both have commands, that’s an error.
The static pattern rule can be better than an implicit rule for these reasons:
- You may wish to override the usual implicit rule for a few files whose names cannot be categorized syntactically but can be given in an explicit list.
- If you cannot be sure of the precise contents of the directories you are using, you may not be sure which other irrelevant files might lead make to use the wrong implicit rule. The choice might depend on the order in which the implicit rule search is done. With static pattern rules, there is no uncertainty: each rule applies to precisely the targets specified.
Double-Colon Rules
Double-colon rules are rules written with :: instead of : after the target names. They are handled differently from ordinary rules when the same target appears in more than one rule.
When a target appears in multiple rules, all the rules must be the same type: all ordinary, or all double-colon. If they are double-colon, each of them is independent of the others. Each double-colon rule’s commands are executed if the target is older than any prerequisites of that rule. This can result in executing none, any, or all of the double-colon rules.
Double-colon rules with the same target are in fact completely separate from one another. Each double-colon rule is processed individually, just as rules with different targets are processed.
The double-colon rules for a target are executed in the order they appear in the makefile. However, the cases where double-colon rules really make sense are those where the order of executing the commands would not matter.
Double-colon rules are somewhat obscure and not often very useful; they provide a mechanism for cases in which the method used to update a target differs depending on which prerequisite files caused the update, and such cases are rare.
Each double-colon rule should specify commands; if it does not, an implicit rule will be used if one applies. See section Using Implicit Rules.
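A minimal sketch of two independent double-colon rules for one hypothetical target (the names are illustrative, not from the text above):

```make
# Each rule's command runs only when its own prerequisite is newer than
# newprog; if both are newer, both commands run, in makefile order.
newprog :: newprog.c
        cc -o newprog newprog.c
newprog :: newprog.h
        cc -o newprog newprog.c
```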
Generating Prerequisites Automatically
In the makefile for a program, many of the rules you need to write often say only that some object file depends on some header file. For example, if main.c uses defs.h via an #include, you would write:
main.o: defs.h
You need this rule so that make knows that it must remake main.o whenever defs.h changes. You can see that for a large program you would have to write dozens of such rules in your makefile. And, you must always be very careful to update the makefile every time you add or remove an #include.
To avoid this hassle, most modern C compilers can write these rules for you, by looking at the #include lines in the source files. Usually this is done with the -M option to the compiler. For example, the command:
cc -M main.c
generates the output:
main.o : main.c defs.h
Thus you no longer have to write all those rules yourself. The compiler will do it for you.
Note that such a prerequisite constitutes mentioning main.o in a makefile, so it can never be considered an intermediate file by implicit rule search. This means that make won’t ever remove the file after using it; see section Chains of Implicit Rules.
With old make programs, it was traditional practice to use this compiler feature to generate prerequisites on demand with a command like make depend. That command would create a file depend containing all the automatically-generated prerequisites; then the makefile could use include to read them in (see section Including Other Makefiles).
In GNU make, the feature of remaking makefiles makes this practice obsolete: you need never tell make explicitly to regenerate the prerequisites, because it always regenerates any makefile that is out of date. See section How Makefiles Are Remade.
The practice we recommend for automatic prerequisite generation is to have one makefile corresponding to each source file. For each source file name.c there is a makefile name.d which lists what files the object file name.o depends on. That way only the source files that have changed need to be rescanned to produce the new prerequisites.

Here is the pattern rule to generate a file of prerequisites (i.e., a makefile) called name.d from a C source file called name.c:
%.d: %.c
        set -e; $(CC) -M $(CPPFLAGS) $< \
          | sed 's/\($*\)\.o[ :]*/\1.o $@ : /g' > $@; \
        [ -s $@ ] || rm -f $@
See section Defining and Redefining Pattern Rules, for information on defining pattern rules. The -e flag to the shell makes it exit immediately if the $(CC) command fails (exits with a nonzero status). Normally the shell exits with the status of the last command in the pipeline (sed in this case), so make would not notice a nonzero status from the compiler.
With the GNU C compiler, you may wish to use the -MM flag instead of -M. This omits prerequisites on system header files. See section Options Controlling the Preprocessor in Using GNU CC, for details.
The purpose of the sed command is to translate (for example):
main.o : main.c defs.h
into:
main.o main.d : main.c defs.h
This makes each .d file depend on all the source and header files that the corresponding .o file depends on. make then knows it must regenerate the prerequisites whenever any of the source or header files changes.
Once you’ve defined the rule to remake the .d files, you then use the include directive to read them all in. See section Including Other Makefiles. For example:
sources = foo.c bar.c

include $(sources:.c=.d)
(This example uses a substitution variable reference to translate the list of source files foo.c bar.c into a list of prerequisite makefiles, foo.d bar.d. See section Substitution References, for full information on substitution references.) Since the .d files are makefiles like any others, make will remake them as necessary with no further work from you. See section How Makefiles Are Remade.
Writing the Commands in Rules
The commands of a rule consist of shell command lines to be executed one by one. Each command line must start with a tab, except that the first command line may be attached to the target-and-prerequisites line with a semicolon in between. Blank lines and lines of just comments may appear among the command lines; they are ignored. (But beware, an apparently “blank” line that begins with a tab is not blank! It is an empty command; see section Using Empty Commands.)
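For example, these two hypothetical rules show both equivalent placements of a single-command recipe (remember that a command on its own line must begin with a tab):

```make
# First command attached to the target line with a semicolon.
hello: ; @echo hello

# Same command on its own tab-indented line.
hello2:
        @echo hello
```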
Users use many different shell programs, but commands in makefiles are always interpreted by /bin/sh unless the makefile specifies otherwise. See section Command Execution.
The shell that is in use determines whether comments can be written on command lines, and what syntax they use. When the shell is /bin/sh, a # starts a comment that extends to the end of the line. The # does not have to be at the beginning of a line. Text on a line before a # is not part of the comment.
Command Echoing
Normally make prints each command line before it is executed. We call this echoing because it gives the appearance that you are typing the commands yourself.
When a line starts with @, the echoing of that line is suppressed. The @ is discarded before the command is passed to the shell. Typically you would use this for a command whose only effect is to print something, such as an echo command to indicate progress through the makefile:
@echo About to make distribution files
When make is given the flag -n or --just-print, it only echoes commands; it won’t execute them. See section Summary of Options. In this case and only this case, even the commands starting with @ are printed. This flag is useful for finding out which commands make thinks are necessary without actually doing them.
The -s or --silent flag to make prevents all echoing, as if all commands started with @. A rule in the makefile for the special target .SILENT without prerequisites has the same effect (see section Special Built-in Target Names). .SILENT is essentially obsolete since @ is more flexible.
Command Execution
When it is time to execute commands to update a target, they are executed by making a new subshell for each line. (In practice, make may take shortcuts that do not affect the results.)
Please note: this implies that shell commands such as cd that set variables local to each process will not affect the following command lines.
If you want to use cd to affect the next command, put the two on a single line with a semicolon between them. Then make will consider them a single command and pass them, together, to a shell which will execute them in sequence. For example:
foo : bar/lose
        cd bar; gobble lose > ../foo
If you would like to split a single shell command into multiple lines of text, you must use a backslash at the end of all but the last subline. Such a sequence of lines is combined into a single line, by deleting the backslash-newline sequences, before passing it to the shell. Thus, the following is equivalent to the preceding example:
foo : bar/lose
        cd bar; \
        gobble lose > ../foo
The program used as the shell is taken from the variable SHELL. By default, the program /bin/sh is used.
On MS-DOS, if SHELL is not set, the value of the variable COMSPEC (which is always set) is used instead.
The processing of lines that set the variable SHELL in Makefiles is different on MS-DOS. The stock shell, command.com, is ridiculously limited in its functionality and many users of make tend to install a replacement shell. Therefore, on MS-DOS, make examines the value of SHELL, and changes its behavior based on whether it points to a Unix-style or DOS-style shell. This allows reasonable functionality even if SHELL points to command.com.
If SHELL points to a Unix-style shell, make on MS-DOS additionally checks whether that shell can indeed be found; if not, it ignores the line that sets SHELL. In MS-DOS, GNU make searches for the shell in the following places:
- In the precise place pointed to by the value of SHELL. For example, if the makefile specifies SHELL = /bin/sh, make will look in the directory /bin on the current drive.
- In the current directory.
- In each of the directories in the PATH variable, in order.
In every directory it examines, make will first look for the specific file (sh in the example above). If this is not found, it will also look in that directory for that file with one of the known extensions which identify executable files; for example .exe, .com, .bat, .btm, .sh, and some others.
If any of these attempts is successful, the value of SHELL will be set to the full pathname of the shell as found. However, if none of these is found, the value of SHELL will not be changed, and thus the line that sets it will be effectively ignored. This is so make will only support features specific to a Unix-style shell if such a shell is actually installed on the system where make runs.
Note that this extended search for the shell is limited to the cases where SHELL is set from the Makefile; if it is set in the environment or command line, you are expected to set it to the full pathname of the shell, exactly as things are on Unix.
The effect of the above DOS-specific processing is that a Makefile that says SHELL = /bin/sh (as many Unix makefiles do) will work on MS-DOS unaltered if you have e.g. sh.exe installed in some directory along your PATH.
Unlike most variables, the variable SHELL is never set from the environment. This is because the SHELL environment variable is used to specify your personal choice of shell program for interactive use. It would be very bad for personal choices like this to affect the functioning of makefiles. See section Variables from the Environment. However, on MS-DOS and MS-Windows the value of SHELL in the environment is used, since on those systems most users do not set this variable, and therefore it is most likely set specifically to be used by make. On MS-DOS, if the setting of SHELL is not suitable for make, you can set the variable MAKESHELL to the shell that make should use; this will override the value of SHELL.
Parallel Execution
GNU make knows how to execute several commands at once. Normally, make will execute only one command at a time, waiting for it to finish before executing the next. However, the -j or --jobs option tells make to execute many commands simultaneously.
On MS-DOS, the -j option has no effect, since that system doesn’t support multi-processing.
If the -j option is followed by an integer, this is the number of commands to execute at once; this is called the number of job slots. If there is nothing looking like an integer after the -j option, there is no limit on the number of job slots. The default number of job slots is one, which means serial execution (one thing at a time).
One unpleasant consequence of running several commands simultaneously is that output generated by the commands appears whenever each command sends it, so messages from different commands may be interspersed.
Another problem is that two processes cannot both take input from the same device; so to make sure that only one command tries to take input from the terminal at once, make will invalidate the standard input streams of all but one running command. This means that attempting to read from standard input will usually be a fatal error (a Broken pipe signal) for most child processes if there are several.
It is unpredictable which command will have a valid standard input stream (which will come from the terminal, or wherever you redirect the standard input of make). The first command run will always get it first, and the first command started after that one finishes will get it next, and so on.
We will change how this aspect of make works if we find a better alternative. In the mean time, you should not rely on any command using standard input at all if you are using the parallel execution feature; but if you are not using this feature, then standard input works normally in all commands.
Finally, handling recursive make invocations raises issues. For more information on this, see section Communicating Options to a Sub-make.
If a command fails (is killed by a signal or exits with a nonzero status), and errors are not ignored for that command (see section Errors in Commands), the remaining command lines to remake the same target will not be run. If a command fails and the -k or --keep-going option was not given (see section Summary of Options), make aborts execution. If make terminates for any reason (including a signal) with child processes running, it waits for them to finish before actually exiting.
When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the -l option to tell make to limit the number of jobs to run at once, based on the load average. The -l or --max-load option is followed by a floating-point number. For example, -l 2.5 will not let make start more than one job if the load average is above 2.5. The -l option with no following number removes the load limit, if one was given with a previous -l option.
More precisely, when make goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with -l, make waits until the load average goes below that limit, or until all the other jobs finish.
By default, there is no load limit.
Errors in Commands
After each shell command returns, make looks at its exit status. If the command completed successfully, the next command line is executed in a new shell; after the last command line is finished, the rule is finished.
If there is an error (the exit status is nonzero), make gives up on the current rule, and perhaps on all rules.
Sometimes the failure of a certain command does not indicate a problem. For example, you may use the mkdir command to ensure that a directory exists. If the directory already exists, mkdir will report an error, but you probably want make to continue regardless.
To ignore errors in a command line, write a - at the beginning of the line’s text (after the initial tab). The - is discarded before the command is passed to the shell for execution.
For example,
clean:
        -rm -f *.o
This causes rm to continue even if it is unable to remove a file.
When you run make with the -i
or --ignore-errors
flag, errors are ignored in all commands of all rules. A rule in the makefile for the special target .IGNORE has the same effect, if there are no prerequisites. These ways of ignoring errors are obsolete because -
is more flexible.
When errors are to be ignored, because of either a -
or the -i
flag, make treats an error return just like success, except that it prints out a message that tells you the status code the command exited with, and says that the error has been ignored.
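Putting these pieces together, a rule that creates an output directory might use the `-` prefix to ignore mkdir's complaint when the directory already exists (the directory name here is hypothetical, and the recipe line must begin with a tab):

```makefile
# '-' tells make to ignore a failure of this one command,
# e.g. mkdir's error when objdir already exists.
objdir:
	-mkdir objdir
```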
When an error happens that make has not been told to ignore, it implies that the current target cannot be correctly remade, and neither can any other that depends on it either directly or indirectly. No further commands will be executed for these targets, since their preconditions have not been achieved.
Normally make gives up immediately in this circumstance, returning a nonzero status. However, if the -k
or --keep-going
flag is specified, make continues to consider the other prerequisites of the pending targets, remaking them if necessary, before it gives up and returns nonzero status. For example, after an error in compiling one object file, make -k
will continue compiling other object files even though it already knows that linking them will be impossible. See section Summary of Options.
The usual behavior assumes that your purpose is to get the specified targets up to date; once make learns that this is impossible, it might as well report the failure immediately. The -k
option says that the real purpose is to test as many of the changes made in the program as possible, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs’ compile command passes the -k
flag by default.
Usually when a command fails, if it has changed the target file at all, the file is corrupted and cannot be used–or at least it is not completely updated. Yet the file’s timestamp says that it is now up to date, so the next time make runs, it will not try to update that file. The situation is just the same as when the command is killed by a signal; see section Interrupting or Killing make. So generally the right thing to do is to delete the target file if the command fails after beginning to change the file. make will do this if .DELETE_ON_ERROR appears as a target. This is almost always what you want make to do, but it is not historical practice; so for compatibility, you must explicitly request it.
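As a sketch of the .DELETE_ON_ERROR behavior described above (the generator script name is hypothetical):

```makefile
# With this special target in the makefile, a failed command's partially
# written target file is deleted instead of being left behind with a
# fresh timestamp that makes it look up to date.
.DELETE_ON_ERROR:

data.out: generate.sh
	./generate.sh > data.out
```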
Interrupting or Killing make
If make gets a fatal signal while a command is executing, it may delete the target file that the command was supposed to update. This is done if the target file’s last-modification time has changed since make first checked it.
The purpose of deleting the target is to make sure that it is remade from scratch when make is next run. Why is this? Suppose you type Ctrl-c
while a compiler is running, and it has begun to write an object file foo.o
. The Ctrl-c
kills the compiler, resulting in an incomplete file whose last-modification time is newer than the source file foo.c
. But make also receives the Ctrl-c
signal and deletes this incomplete file. If make did not do this, the next invocation of make would think that foo.o
did not require updating–resulting in a strange error message from the linker when it tries to link an object file half of which is missing.
You can prevent the deletion of a target file in this way by making the special target .PRECIOUS depend on it. Before remaking a target, make checks to see whether it appears on the prerequisites of .PRECIOUS, and thereby decides whether the target should be deleted if a signal happens. Some reasons why you might do this are that the target is updated in some atomic fashion, or exists only to record a modification-time (its contents do not matter), or must exist at all times to prevent other sorts of trouble.
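A minimal sketch of .PRECIOUS, assuming a hypothetical index file that is updated atomically by its own tool and therefore must never be deleted by make:

```makefile
# index.db is updated atomically, so a partial file cannot occur;
# tell make never to delete it on interrupt or error.
.PRECIOUS: index.db

index.db: records.txt
	build-index records.txt index.db
```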
Recursive Use of make
Recursive use of make means using make as a command in a makefile. This technique is useful when you want separate makefiles for various subsystems that compose a larger system. For example, suppose you have a subdirectory subdir
which has its own makefile, and you would like the containing directory’s makefile to run make on the subdirectory. You can do it by writing this:
subsystem:
        cd subdir && $(MAKE)
or, equivalently, this (see section Summary of Options):
subsystem:
        $(MAKE) -C subdir
You can write recursive make commands just by copying this example, but there are many things to know about how they work and why, and about how the sub-make relates to the top-level make.
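A common elaboration of this pattern, with hypothetical subdirectory names, recurses into several subdirectories from one rule:

```makefile
SUBDIRS = lib src doc

.PHONY: subdirs $(SUBDIRS)

subdirs: $(SUBDIRS)

# $@ is the subdirectory being built; each gets its own sub-make.
$(SUBDIRS):
	$(MAKE) -C $@
```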
For your convenience, GNU make sets the variable CURDIR to the pathname of the current working directory for you. If -C is in effect, it will contain the path of the new directory, not the original. The value has the same precedence it would have if it were set in the makefile (by default, an environment variable CURDIR will not override this value). Note that setting this variable has no effect on the operation of make.
How the MAKE Variable Works
Recursive make commands should always use the variable MAKE, not the explicit command name make
, as shown here:
subsystem:
        cd subdir && $(MAKE)
The value of this variable is the file name with which make was invoked. If this file name was /bin/make
, then the command executed is cd subdir && /bin/make
. If you use a special version of make to run the top-level makefile, the same special version will be executed for recursive invocations.
As a special feature, using the variable MAKE in the commands of a rule alters the effects of the -t
(--touch
), -n
(--just-print
), or -q
(--question
) option. Using the MAKE variable has the same effect as using a +
character at the beginning of the command line. See section Instead of Executing the Commands.
Consider the command make -t
in the above example. (The -t
option marks targets as up to date without actually running any commands; see section Instead of Executing the Commands.) Following the usual definition of -t
, a make -t
command in the example would create a file named subsystem
and do nothing else. What you really want it to do is run cd subdir && make -t
; but that would require executing the command, and -t
says not to execute commands.
The special feature makes this do what you want: whenever a command line of a rule contains the variable MAKE, the flags -t
, -n
and -q
do not apply to that line. Command lines containing MAKE are executed normally despite the presence of a flag that causes most commands not to be run. The usual MAKEFLAGS mechanism passes the flags to the sub-make (see section Communicating Options to a Sub-make), so your request to touch the files, or print the commands, is propagated to the subsystem.
Communicating Variables to a Sub-make
Variable values of the top-level make can be passed to the sub-make through the environment by explicit request. These variables are defined in the sub-make as defaults, but do not override what is specified in the makefile used by the sub-make makefile unless you use the -e
switch (see section Summary of Options).
To pass down, or export, a variable, make adds the variable and its value to the environment for running each command. The sub-make, in turn, uses the environment to initialize its table of variable values. See section Variables from the Environment.
Except by explicit request, make exports a variable only if it is either defined in the environment initially or set on the command line, and if its name consists only of letters, numbers, and underscores. Some shells cannot cope with environment variable names consisting of characters other than letters, numbers, and underscores.
The special variables SHELL and MAKEFLAGS are always exported (unless you unexport them). MAKEFILES is exported if you set it to anything.
make automatically passes down variable values that were defined on the command line, by putting them in the MAKEFLAGS variable. See the next section.
Variables are not normally passed down if they were created by default by make (see section Variables Used by Implicit Rules). The sub-make will define these for itself.
If you want to export specific variables to a sub-make, use the export directive, like this:
export variable …
If you want to prevent a variable from being exported, use the unexport directive, like this:
unexport variable …
As a convenience, you can define a variable and export it at the same time by doing:
export variable = value
has the same result as:
variable = value
export variable
and
export variable := value
has the same result as:
variable := value
export variable
Likewise,
export variable += value
is just like:
variable += value
export variable
See section Appending More Text to Variables.
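Combining these directives, a makefile might export one variable for its sub-makes while keeping another purely local (the variable names here are illustrative):

```makefile
CFLAGS := -g -O2
export CFLAGS           # sub-makes see CFLAGS in their environment

internal_tmp := /tmp/build-scratch
unexport internal_tmp   # never placed in a sub-make's environment
```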
You may notice that the export and unexport directives work in make in the same way they work in the shell, sh.
If you want all variables to be exported by default, you can use export by itself:
export
This tells make that variables which are not explicitly mentioned in an export or unexport directive should be exported. Any variable given in an unexport directive will still not be exported. If you use export by itself to export variables by default, variables whose names contain characters other than alphanumerics and underscores will not be exported unless specifically mentioned in an export directive.
The behavior elicited by an export directive by itself was the default in older versions of GNU make. If your makefiles depend on this behavior and you want to be compatible with old versions of make, you can write a rule for the special target .EXPORT_ALL_VARIABLES instead of using the export directive. This will be ignored by old makes, while the export directive will cause a syntax error.
Likewise, you can use unexport by itself to tell make not to export variables by default. Since this is the default behavior, you would only need to do this if export had been used by itself earlier (in an included makefile, perhaps). You cannot use export and unexport by themselves to have variables exported for some commands and not for others. The last export or unexport directive that appears by itself determines the behavior for the entire run of make.
As a special feature, the variable MAKELEVEL is changed when it is passed down from level to level. This variable’s value is a string which is the depth of the level as a decimal number. The value is 0
for the top-level make; 1
for a sub-make, 2
for a sub-sub-make, and so on. The incrementation happens when make sets up the environment for a command.
The main use of MAKELEVEL is to test it in a conditional directive (see section Conditional Parts of Makefiles); this way you can write a makefile that behaves one way if run recursively and another way if run directly by you.
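A small sketch of testing MAKELEVEL in a conditional, as described above (the message text is illustrative):

```makefile
ifeq (0,$(MAKELEVEL))
banner := This is the top-level make
else
banner := This is a sub-make at level $(MAKELEVEL)
endif

all:
	@echo '$(banner)'
```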
You can use the variable MAKEFILES to cause all sub-make commands to use additional makefiles. The value of MAKEFILES is a whitespace-separated list of file names. This variable, if defined in the outer-level makefile, is passed down through the environment; then it serves as a list of extra makefiles for the sub-make to read before the usual or specified ones. See section The Variable MAKEFILES.
Communicating Options to a Sub-make
Flags such as -s
and -k
are passed automatically to the sub-make through the variable MAKEFLAGS. This variable is set up automatically by make to contain the flag letters that make received. Thus, if you do make -ks
then MAKEFLAGS gets the value ks
.
As a consequence, every sub-make gets a value for MAKEFLAGS in its environment. In response, it takes the flags from that value and processes them as if they had been given as arguments. See section Summary of Options.
Likewise variables defined on the command line are passed to the sub-make through MAKEFLAGS. Words in the value of MAKEFLAGS that contain =
, make treats as variable definitions just as if they appeared on the command line. See section Overriding Variables.
The options -C
, -f
, -o
, and -W
are not put into MAKEFLAGS; these options are not passed down.
The -j
option is a special case (see section Parallel Execution). If you set it to some numeric value N
and your operating system supports it (most any UNIX system will; others typically won’t), the parent make and all the sub-makes will communicate to ensure that there are only N
jobs running at the same time between them all. Note that any job that is marked recursive (see section Instead of Executing the Commands) doesn’t count against the total jobs (otherwise we could get N
sub-makes running and have no slots left over for any real work!)
If your operating system doesn’t support the above communication, then -j 1
is always put into MAKEFLAGS instead of the value you specified. This is because if the -j
option were passed down to sub-makes, you would get many more jobs running in parallel than you asked for. If you give -j
with no numeric argument, meaning to run as many jobs as possible in parallel, this is passed down, since multiple infinities are no more than one.
If you do not want to pass the other flags down, you must change the value of MAKEFLAGS, like this:
subsystem:
        cd subdir && $(MAKE) MAKEFLAGS=
The command line variable definitions really appear in the variable MAKEOVERRIDES, and MAKEFLAGS contains a reference to this variable. If you do want to pass flags down normally, but don’t want to pass down the command line variable definitions, you can reset MAKEOVERRIDES to empty, like this:
MAKEOVERRIDES =
This is not usually useful to do. However, some systems have a small fixed limit on the size of the environment, and putting so much information into the value of MAKEFLAGS can exceed it. If you see the error message Arg list too long
, this may be the problem.
(For strict compliance with POSIX.2, changing MAKEOVERRIDES does not affect MAKEFLAGS if the special target .POSIX
appears in the makefile. You probably do not care about this.)
A similar variable MFLAGS exists also, for historical compatibility. It has the same value as MAKEFLAGS except that it does not contain the command line variable definitions, and it always begins with a hyphen unless it is empty (MAKEFLAGS begins with a hyphen only when it begins with an option that has no single-letter version, such as --warn-undefined-variables
). MFLAGS was traditionally used explicitly in the recursive make command, like this:
subsystem:
        cd subdir && $(MAKE) $(MFLAGS)
but now MAKEFLAGS makes this usage redundant. If you want your makefiles to be compatible with old make programs, use this technique; it will work fine with more modern make versions too.
The MAKEFLAGS variable can also be useful if you want to have certain options, such as -k
(see section Summary of Options), set each time you run make. You simply put a value for MAKEFLAGS in your environment. You can also set MAKEFLAGS in a makefile, to specify additional flags that should also be in effect for that makefile. (Note that you cannot use MFLAGS this way. That variable is set only for compatibility; make does not interpret a value you set for it in any way.)
When make interprets the value of MAKEFLAGS (either from the environment or from a makefile), it first prepends a hyphen if the value does not already begin with one. Then it chops the value into words separated by blanks, and parses these words as if they were options given on the command line (except that -C
, -f
, -h
, -o
, -W
, and their long-named versions are ignored; and there is no error for an invalid option).
If you do put MAKEFLAGS in your environment, you should be sure not to include any options that will drastically affect the actions of make and undermine the purpose of makefiles and of make itself. For instance, the -t
, -n
, and -q
options, if put in one of these variables, could have disastrous consequences and would certainly have at least surprising and probably annoying effects.
The `--print-directory` Option
If you use several levels of recursive make invocations, the -w
or --print-directory
option can make the output a lot easier to understand by showing each directory as make starts processing it and as make finishes processing it. For example, if make -w
is run in the directory /u/gnu/make
, make will print a line of the form:
make: Entering directory `/u/gnu/make'.
before doing anything else, and a line of the form:
make: Leaving directory `/u/gnu/make'.
when processing is completed.
Normally, you do not need to specify this option because make
does it for you: -w
is turned on automatically when you use the -C
option, and in sub-makes. make will not automatically turn on -w
if you also use -s
, which says to be silent, or if you use --no-print-directory
to explicitly disable it.
Defining Canned Command Sequences
When the same sequence of commands is useful in making various targets, you can define it as a canned sequence with the define directive, and refer to the canned sequence from the rules for those targets. The canned sequence is actually a variable, so the name must not conflict with other variable names.
Here is an example of defining a canned sequence of commands:
define run-yacc
yacc $(firstword $^)
mv y.tab.c $@
endef
Here run-yacc is the name of the variable being defined; endef marks the end of the definition; the lines in between are the commands. The define directive does not expand variable references and function calls in the canned sequence; the $
characters, parentheses, variable names, and so on, all become part of the value of the variable you are defining. See section Defining Variables Verbatim, for a complete explanation of define.
The first command in this example runs Yacc on the first prerequisite of whichever rule uses the canned sequence. The output file from Yacc is always named y.tab.c
. The second command moves the output to the rule’s target file name.
To use the canned sequence, substitute the variable into the commands of a rule. You can substitute it like any other variable (see section Basics of Variable References). Because variables defined by define are recursively expanded variables, all the variable references you wrote inside the define are expanded now. For example:
foo.c : foo.y
        $(run-yacc)
foo.y
will be substituted for the variable $^
when it occurs in run-yacc’s value, and foo.c
for $@
.
This is a realistic example, but this particular one is not needed in practice because make has an implicit rule to figure out these commands based on the file names involved (see section Using Implicit Rules).
In command execution, each line of a canned sequence is treated just as if the line appeared on its own in the rule, preceded by a tab. In particular, make invokes a separate subshell for each line. You can use the special prefix characters that affect command lines (@
, -
, and +
) on each line of a canned sequence. See section Writing the Commands in Rules. For example, using this canned sequence:
define frobnicate
@echo "frobnicating target $@"
frob-step-1 $< -o $@-step-1
frob-step-2 $@-step-1 -o $@
endef
make will not echo the first line, the echo command. But it will echo the following two command lines.
On the other hand, prefix characters on the command line that refers to a canned sequence apply to every line in the sequence. So the rule:
frob.out: frob.in
        @$(frobnicate)
does not echo any commands. (See section Command Echoing, for a full explanation of @
.)
Using Empty Commands
It is sometimes useful to define commands which do nothing. This is done simply by giving a command that consists of nothing but whitespace. For example:
target: ;
defines an empty command string for target
. You could also use a line beginning with a tab character to define an empty command string, but this would be confusing because such a line looks empty.
You may be wondering why you would want to define a command string that does nothing. The only reason this is useful is to prevent a target from getting implicit commands (from implicit rules or the .DEFAULT special target; see section Using Implicit Rules and section Defining Last-Resort Default Rules).
You may be inclined to define empty command strings for targets that are not actual files, but only exist so that their prerequisites can be remade. However, this is not the best way to do that, because the prerequisites may not be remade properly if the target file actually does exist. See section Phony Targets, for a better way to do this.
How to Use Variables
A variable is a name defined in a makefile to represent a string of text, called the variable’s value. These values are substituted by explicit request into targets, prerequisites, commands, and other parts of the makefile. (In some other versions of make, variables are called macros.)
Variables and functions in all parts of a makefile are expanded when read, except for the shell commands in rules, the right-hand sides of variable definitions using =
, and the bodies of variable definitions using the define directive.
Variables can represent lists of file names, options to pass to compilers, programs to run, directories to look in for source files, directories to write output in, or anything else you can imagine.
A variable name may be any sequence of characters not containing :
, #
, =
, or leading or trailing whitespace. However, variable names containing characters other than letters, numbers, and underscores should be avoided, as they may be given special meanings in the future, and with some shells they cannot be passed through the environment to a sub-make (see section Communicating Variables to a Sub-make).
Variable names are case-sensitive. The names foo
, FOO
, and Foo
all refer to different variables.
It is traditional to use upper case letters in variable names, but we recommend using lower case letters for variable names that serve internal purposes in the makefile, and reserving upper case for parameters that control implicit rules or for parameters that the user should override with command options (see section Overriding Variables).
A few variables have names that are a single punctuation character or just a few characters. These are the automatic variables, and they have particular specialized uses. See section Automatic Variables.
Basics of Variable References
To substitute a variable’s value, write a dollar sign followed by the name of the variable in parentheses or braces: either $(foo)
or ${foo}
is a valid reference to the variable foo. This special significance of $
is why you must write $$
to have the effect of a single dollar sign in a file name or command.
Variable references can be used in any context: targets, prerequisites, commands, most directives, and new variable values. Here is an example of a common case, where a variable holds the names of all the object files in a program:
objects = program.o foo.o utils.o

program : $(objects)
        cc -o program $(objects)

$(objects) : defs.h
Variable references work by strict textual substitution. Thus, the rule
foo = c
prog.o : prog.$(foo)
        $(foo)$(foo) -$(foo) prog.$(foo)
could be used to compile a C program prog.c
. Since spaces before the variable value are ignored in variable assignments, the value of foo is precisely c
. (Don’t actually write your makefiles this way!)
A dollar sign followed by a character other than a dollar sign, open-parenthesis or open-brace treats that single character as the variable name. Thus, you could reference the variable x with $x
. However, this practice is strongly discouraged, except in the case of the automatic variables (see section Automatic Variables).
The Two Flavors of Variables
There are two ways that a variable in GNU make can have a value; we call them the two flavors of variables. The two flavors are distinguished in how they are defined and in what they do when expanded.
The first flavor of variable is a recursively expanded variable. Variables of this sort are defined by lines using =
(see section Setting Variables) or by the define directive (see section Defining Variables Verbatim). The value you specify is installed verbatim; if it contains references to other variables, these references are expanded whenever this variable is substituted (in the course of expanding some other string). When this happens, it is called recursive expansion.
For example,
foo = $(bar)
bar = $(ugh)
ugh = Huh?

all:;echo $(foo)
will echo Huh?
: $(foo)
expands to $(bar)
which expands to $(ugh)
which finally expands to Huh?
.
This flavor of variable is the only sort supported by other versions of make. It has its advantages and its disadvantages. An advantage (most would say) is that:
CFLAGS = $(include_dirs) -O
include_dirs = -Ifoo -Ibar
will do what was intended: when CFLAGS
is expanded in a command, it will expand to -Ifoo -Ibar -O
. A major disadvantage is that you cannot append something on the end of a variable, as in
CFLAGS = $(CFLAGS) -O
because it will cause an infinite loop in the variable expansion. (Actually make detects the infinite loop and reports an error.)
Another disadvantage is that any functions (see section Functions for Transforming Text) referenced in the definition will be executed every time the variable is expanded. This makes make run slower; worse, it causes the wildcard and shell functions to give unpredictable results because you cannot easily control when they are called, or even how many times.
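The repeated-function-call problem can be sketched like this (the particular shell commands are illustrative):

```makefile
# Recursively expanded: the shell runs 'date' anew every time NOW is
# expanded, so two references may yield two different timestamps.
NOW = $(shell date)

# Simply expanded (described next): 'pwd' runs exactly once, when this
# line is read, and HERE holds a fixed string thereafter.
HERE := $(shell pwd)
```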
To avoid all the problems and inconveniences of recursively expanded variables, there is another flavor: simply expanded variables.
Simply expanded variables are defined by lines using :=
(see section Setting Variables). The value of a simply expanded variable is scanned once and for all, expanding any references to other variables and functions, when the variable is defined. The actual value of the simply expanded variable is the result of expanding the text that you write. It does not contain any references to other variables; it contains their values as of the time this variable was defined. Therefore,
x := foo
y := $(x) bar
x := later
is equivalent to
y := foo bar
x := later
When a simply expanded variable is referenced, its value is substituted verbatim.
Here is a somewhat more complicated example, illustrating the use of :=
in conjunction with the shell function. (See section The shell Function.) This example also shows use of the variable MAKELEVEL, which is changed when it is passed down from level to level. (See section Communicating Variables to a Sub-make, for information about MAKELEVEL.)
ifeq (0,${MAKELEVEL})
cur-dir   := $(shell pwd)
whoami    := $(shell whoami)
host-type := $(shell arch)
MAKE := ${MAKE} host-type=${host-type} whoami=${whoami}
endif
An advantage of this use of :=
is that a typical `descend into a directory’ command then looks like this:
${subdirs}:
        ${MAKE} cur-dir=${cur-dir}/$@ -C $@ all
Simply expanded variables generally make complicated makefile programming more predictable because they work like variables in most programming languages. They allow you to redefine a variable using its own value (or its value processed in some way by one of the expansion functions) and to use the expansion functions much more efficiently (see section Functions for Transforming Text).
You can also use them to introduce controlled leading whitespace into variable values. Leading whitespace characters are discarded from your input before substitution of variable references and function calls; this means you can include leading spaces in a variable value by protecting them with variable references, like this:
nullstring :=
space := $(nullstring) # end of the line
Here the value of the variable space is precisely one space. The comment # end of the line
is included here just for clarity. Since trailing space characters are not stripped from variable values, just a space at the end of the line would have the same effect (but be rather hard to read). If you put whitespace at the end of a variable value, it is a good idea to put a comment like that at the end of the line to make your intent clear. Conversely, if you do not want any whitespace characters at the end of your variable value, you must remember not to put a random comment on the end of the line after some whitespace, such as this:
dir := /foo/bar # directory to put the frobs in
Here the value of the variable dir is /foo/bar
(with four trailing spaces), which was probably not the intention. (Imagine something like $(dir)/file
with this definition!)
There is another assignment operator for variables, ?=
. This is called a conditional variable assignment operator, because it only has an effect if the variable is not yet defined. This statement:
FOO ?= bar
is exactly equivalent to this (see section The origin Function):
ifeq ($(origin FOO), undefined)
FOO = bar
endif
Note that a variable set to an empty value is still defined, so ?=
will not set that variable.
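A short sketch of the conditional assignment operator, including the empty-value case just noted (the variable names are illustrative):

```makefile
CC ?= gcc          # takes effect only if CC is not already defined

EMPTY :=
EMPTY ?= filled    # no effect: EMPTY is defined (as an empty string)
```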
Advanced Features for Reference to Variables
This section describes some advanced features you can use to reference variables in more flexible ways.
Substitution References
A substitution reference substitutes the value of a variable with alterations that you specify. It has the form $(var:a=b)
(or ${var:a=b}
) and its meaning is to take the value of the variable var, replace every a at the end of a word with b in that value, and substitute the resulting string.
When we say “at the end of a word”, we mean that a must appear either followed by whitespace or at the end of the value in order to be replaced; other occurrences of a in the value are unaltered. For example:
foo := a.o b.o c.o
bar := $(foo:.o=.c)
sets bar
to a.c b.c c.c
. See section Setting Variables.
A substitution reference is actually an abbreviation for use of the patsubst expansion function (see section Functions for String Substitution and Analysis). We provide substitution references as well as patsubst for compatibility with other implementations of make.
Another type of substitution reference lets you use the full power of the patsubst function. It has the same form $(var:a=b)
described above, except that now a must contain a single %
character. This case is equivalent to $(patsubst a,b,$(var))
. See section Functions for String Substitution and Analysis, for a description of the patsubst function.
For example:
foo := a.o b.o c.o
bar := $(foo:%.o=%.c)
sets bar
to a.c b.c c.c
.
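The two substitution-reference forms and the underlying patsubst call are interchangeable; in this sketch all three variables end up with the value a.c b.c c.c:

```makefile
foo := a.o b.o c.o

plain   := $(foo:.o=.c)                # suffix form
percent := $(foo:%.o=%.c)              # pattern form
direct  := $(patsubst %.o,%.c,$(foo))  # explicit patsubst call
```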
Computed Variable Names
Computed variable names are a complicated concept needed only for sophisticated makefile programming. For most purposes you need not consider them, except to know that making a variable with a dollar sign in its name might have strange results. However, if you are the type that wants to understand everything, or you are actually interested in what they do, read on.
Variables may be referenced inside the name of a variable. This is called a computed variable name or a nested variable reference. For example,
x = y
y = z
a := $($(x))
defines a as z
: the $(x)
inside $($(x))
expands to y
, so $($(x))
expands to $(y)
which in turn expands to z
. Here the name of the variable to reference is not stated explicitly; it is computed by expansion of $(x)
. The reference $(x)
here is nested within the outer variable reference.
The previous example shows two levels of nesting, but any number of levels is possible. For example, here are three levels:
x = y
y = z
z = u
a := $($($(x)))
Here the innermost $(x)
expands to y
, so $($(x))
expands to $(y)
which in turn expands to z
; now we have $(z)
, which becomes u
.
References to recursively-expanded variables within a variable name are reexpanded in the usual fashion. For example:
x = $(y)
y = z
z = Hello
a := $($(x))
defines a as Hello
: $($(x))
becomes $($(y))
which becomes $(z)
which becomes Hello
.
Nested variable references can also contain modified references and function invocations (see section Functions for Transforming Text), just like any other reference. For example, using the subst function (see section Functions for String Substitution and Analysis):

    x = variable1
    variable2 := Hello
    y = $(subst 1,2,$(x))
    z = y
    a := $($($(z)))

eventually defines a as `Hello`. It is doubtful that anyone would ever want to write a nested reference as convoluted as this one, but it works: `$($($(z)))` expands to `$($(y))`, which becomes `$($(subst 1,2,$(x)))`. This gets the value `variable1` from x and changes it by substitution to `variable2`, so that the entire string becomes `$(variable2)`, a simple variable reference whose value is `Hello`.
A computed variable name need not consist entirely of a single variable reference. It can contain several variable references, as well as some invariant text. For example,

    a_dirs := dira dirb
    1_dirs := dir1 dir2

    a_files := filea fileb
    1_files := file1 file2

    ifeq "$(use_a)" "yes"
    a1 := a
    else
    a1 := 1
    endif

    ifeq "$(use_dirs)" "yes"
    df := dirs
    else
    df := files
    endif

    dirs := $($(a1)_$(df))

will give dirs the same value as a_dirs, 1_dirs, a_files or 1_files, depending on the settings of use_a and use_dirs.
Computed variable names can also be used in substitution references:

    a_objects := a.o b.o c.o
    1_objects := 1.o 2.o 3.o

    sources := $($(a1)_objects:.o=.c)

defines sources as either `a.c b.c c.c` or `1.c 2.c 3.c`, depending on the value of a1.
The only restriction on this sort of use of nested variable references is that they cannot specify part of the name of a function to be called. This is because the test for a recognized function name is done before the expansion of nested references. For example,

    ifdef do_sort
    func := sort
    else
    func := strip
    endif

    bar := a d b g q c

    foo := $($(func) $(bar))

attempts to give foo the value of the variable `sort a d b g q c` or `strip a d b g q c`, rather than giving `a d b g q c` as the argument to either the sort or the strip function. This restriction could be removed in the future if that change is shown to be a good idea.
You can also use computed variable names in the left-hand side of a variable assignment, or in a define directive, as in:

    dir = foo
    $(dir)_sources := $(wildcard $(dir)/*.c)
    define $(dir)_print
    lpr $($(dir)_sources)
    endef

This example defines the variables dir, foo_sources, and foo_print.
Note that nested variable references are quite different from recursively expanded variables (see section The Two Flavors of Variables), though both are used together in complex ways when doing makefile programming.
How Variables Get Their Values
Variables can get values in several different ways:
- You can specify an overriding value when you run make. See section Overriding Variables.
- You can specify a value in the makefile, either with an assignment (see section Setting Variables) or with a verbatim definition (see section Defining Variables Verbatim).
- Variables in the environment become make variables. See section Variables from the Environment.
- Several automatic variables are given new values for each rule. Each of these has a single conventional use. See section Automatic Variables.
- Several variables have constant initial values. See section Variables Used by Implicit Rules.
Setting Variables
To set a variable from the makefile, write a line starting with the variable name followed by `=` or `:=`. Whatever follows the `=` or `:=` on the line becomes the value. For example,

    objects = main.o foo.o bar.o utils.o

defines a variable named objects. Whitespace around the variable name and immediately after the `=` is ignored.
Variables defined with `=` are recursively expanded variables. Variables defined with `:=` are simply expanded variables; these definitions can contain variable references which will be expanded before the definition is made. See section The Two Flavors of Variables.
The variable name may contain function and variable references, which are expanded when the line is read to find the actual variable name to use.
There is no limit on the length of the value of a variable except the amount of swapping space on the computer. When a variable definition is long, it is a good idea to break it into several lines by inserting backslash-newline at convenient places in the definition. This will not affect the functioning of make, but it will make the makefile easier to read.
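For instance, a long list of object files reads more easily when split with backslash-newlines; make joins the pieces into one logical line. (The file names here are illustrative.)

```make
# One logical line, split across several physical lines with
# backslash-newline for readability. Hypothetical file names.
objects = main.o foo.o bar.o utils.o \
          input.o output.o \
          parse.o
```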
Most variable names are considered to have the empty string as a value if you have never set them. Several variables have built-in initial values that are not empty, but you can set them in the usual ways (see section Variables Used by Implicit Rules). Several special variables are set automatically to a new value for each rule; these are called the automatic variables (see section Automatic Variables).
If you’d like a variable to be set to a value only if it’s not already set, then you can use the shorthand operator `?=` instead of `=`. These two settings of the variable FOO are identical (see section The origin Function):

    FOO ?= bar

and

    ifeq ($(origin FOO), undefined)
    FOO = bar
    endif
Appending More Text to Variables
Often it is useful to add more text to the value of a variable already defined. You do this with a line containing `+=`, like this:

    objects += another.o

This takes the value of the variable objects and adds the text `another.o` to it (preceded by a single space). Thus:

    objects = main.o foo.o bar.o utils.o
    objects += another.o

sets objects to `main.o foo.o bar.o utils.o another.o`.
Using `+=` is similar to:

    objects = main.o foo.o bar.o utils.o
    objects := $(objects) another.o

but differs in ways that become important when you use more complex values.
When the variable in question has not been defined before, `+=` acts just like normal `=`: it defines a recursively-expanded variable. However, when there is a previous definition, exactly what `+=` does depends on what flavor of variable you defined originally. See section The Two Flavors of Variables, for an explanation of the two flavors of variables.

When you add to a variable’s value with `+=`, make acts essentially as if you had included the extra text in the initial definition of the variable. If you defined it first with `:=`, making it a simply-expanded variable, `+=` adds to that simply-expanded definition, and expands the new text before appending it to the old value just as `:=` does (see section Setting Variables, for a full explanation of `:=`). In fact,

    variable := value
    variable += more

is exactly equivalent to:

    variable := value
    variable := $(variable) more
On the other hand, when you use `+=` with a variable that you defined first to be recursively-expanded using plain `=`, make does something a bit different. Recall that when you define a recursively-expanded variable, make does not expand the value you set for variable and function references immediately. Instead it stores the text verbatim, and saves these variable and function references to be expanded later, when you refer to the new variable (see section The Two Flavors of Variables). When you use `+=` on a recursively-expanded variable, it is this unexpanded text to which make appends the new text you specify.

    variable = value
    variable += more

is roughly equivalent to:

    temp = value
    variable = $(temp) more

except that of course it never defines a variable called temp. The importance of this comes when the variable’s old value contains variable references. Take this common example:

    CFLAGS = $(includes) -O
    …
    CFLAGS += -pg # enable profiling
The first line defines the CFLAGS variable with a reference to another variable, includes. (CFLAGS is used by the rules for C compilation; see section Catalogue of Implicit Rules.) Using `=` for the definition makes CFLAGS a recursively-expanded variable, meaning `$(includes) -O` is not expanded when make processes the definition of CFLAGS. Thus, includes need not be defined yet for its value to take effect. It only has to be defined before any reference to CFLAGS. If we tried to append to the value of CFLAGS without using `+=`, we might do it like this:

    CFLAGS := $(CFLAGS) -pg # enable profiling

This is pretty close, but not quite what we want. Using `:=` redefines CFLAGS as a simply-expanded variable; this means make expands the text `$(CFLAGS) -pg` before setting the variable. If includes is not yet defined, we get `-O -pg`, and a later definition of includes will have no effect. Conversely, by using `+=` we set CFLAGS to the unexpanded value `$(includes) -O -pg`. Thus we preserve the reference to includes, so if that variable gets defined at any later point, a reference like `$(CFLAGS)` still uses its value.
The override Directive
If a variable has been set with a command argument (see section Overriding Variables), then ordinary assignments in the makefile are ignored. If you want to set the variable in the makefile even though it was set with a command argument, you can use an override directive, which is a line that looks like this:
override variable = value
or
override variable := value
To append more text to a variable defined on the command line, use:
override variable += more text
See section Appending More Text to Variables.
The override directive was not invented for escalation in the war between makefiles and command arguments. It was invented so you can alter and add to values that the user specifies with command arguments.
For example, suppose you always want the `-g` switch when you run the C compiler, but you would like to allow the user to specify the other switches with a command argument just as usual. You could use this override directive:

    override CFLAGS += -g
You can also use override directives with define directives. This is done as you might expect:
    override define foo
    bar
    endef
See the next section for information about define.
Defining Variables Verbatim
Another way to set the value of a variable is to use the define directive. This directive has an unusual syntax which allows newline characters to be included in the value, which is convenient for defining canned sequences of commands (see section Defining Canned Command Sequences).
The define directive is followed on the same line by the name of the variable and nothing more. The value to give the variable appears on the following lines. The end of the value is marked by a line containing just the word endef. Aside from this difference in syntax, define works just like `=`: it creates a recursively-expanded variable (see section The Two Flavors of Variables). The variable name may contain function and variable references, which are expanded when the directive is read to find the actual variable name to use.

    define two-lines
    echo foo
    echo $(bar)
    endef

The value in an ordinary assignment cannot contain a newline; but the newlines that separate the lines of the value in a define become part of the variable’s value (except for the final newline which precedes the endef and is not considered part of the value).

The previous example is functionally equivalent to this:

    two-lines = echo foo; echo $(bar)
since two commands separated by semicolon behave much like two separate shell commands. However, note that using two separate lines means make will invoke the shell twice, running an independent subshell for each line. See section Command Execution.
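To make this concrete, here is a sketch of the difference (the directory name is hypothetical): a `cd` on one line does not affect the next line, because each line runs in its own subshell.

```make
# Each command line runs in a fresh subshell, so pwd here prints
# the original directory, not subdir. (subdir is hypothetical.)
broken:
	cd subdir
	pwd

# Joining the commands with '&&' (or ';') keeps them in one shell,
# so the cd takes effect for pwd.
works:
	cd subdir && pwd
```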
If you want variable definitions made with define to take precedence over command-line variable definitions, you can use the override directive together with define:
    override define two-lines
    foo
    $(bar)
    endef
See section The override Directive.
Variables from the Environment
Variables in make can come from the environment in which make is run. Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. But an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the `-e` flag is specified, then values from the environment override assignments in the makefile, but this is not recommended practice. See section Summary of Options.)
Thus, by setting the variable CFLAGS in your environment, you can cause all C compilations in most makefiles to use the compiler switches you prefer. This is safe for variables with standard or conventional meanings because you know that no makefile will use them for other things. (But this is not totally reliable; some makefiles set CFLAGS explicitly and therefore are not affected by the value in the environment.)
When make is invoked recursively, variables defined in the outer invocation can be passed to inner invocations through the environment (see section Recursive Use of make). By default, only variables that came from the environment or the command line are passed to recursive invocations. You can use the export directive to pass other variables. See section Communicating Variables to a Sub-make, for full details.
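As a sketch of the export directive (the variable name and subdirectory are hypothetical):

```make
# VERSION is defined here, not in the environment or on the command
# line, so it would not normally reach the sub-make; export passes
# it along through the environment.
VERSION = 1.2
export VERSION

subsystem:
	cd subdir && $(MAKE)
```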
Other use of variables from the environment is not recommended. It is not wise for makefiles to depend for their functioning on environment variables set up outside their control, since this would cause different users to get different results from the same makefile. This is against the whole purpose of most makefiles.
Such problems would be especially likely with the variable SHELL, which is normally present in the environment to specify the user’s choice of interactive shell. It would be very undesirable for this choice to affect make. So make ignores the environment value of SHELL (except on MS-DOS and MS-Windows, where SHELL is usually not set. See section Command Execution.)
Target-specific Variable Values
Variable values in make are usually global; that is, they are the same regardless of where they are evaluated (unless they’re reset, of course). One exception to that is automatic variables (see section Automatic Variables).
The other exception is target-specific variable values. This feature allows you to define different values for the same variable, based on the target that make is currently building. As with automatic variables, these values are only available within the context of a target’s command script (and in other target-specific assignments).
Set a target-specific variable value like this:
target … : variable-assignment
or like this:
target … : override variable-assignment
Multiple target values create a target-specific variable value for each member of the target list individually.
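For example, listing several targets before the assignment gives each one its own target-specific value (the target names are hypothetical):

```make
# prog1 and prog2 each get their own target-specific CFLAGS value,
# as if the assignment had been written once per target.
prog1 prog2 : CFLAGS += -O2
```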
The variable-assignment can be any valid form of assignment: recursive (`=`), static (`:=`), appending (`+=`), or conditional (`?=`). All variables that appear within the variable-assignment are evaluated within the context of the target: thus, any previously-defined target-specific variable values will be in effect. Note that this variable is actually distinct from any “global” value: the two variables do not have to have the same flavor (recursive vs. static).
Target-specific variables have the same priority as any other makefile variable. Variables provided on the command line (and in the environment, if the `-e` option is in force) will take precedence. Specifying the override directive will allow the target-specific variable value to be preferred.
There is one more special feature of target-specific variables: when you define a target-specific variable, that variable value is also in effect for all prerequisites of this target (unless those prerequisites override it with their own target-specific variable value). So, for example, a statement like this:
    prog : CFLAGS = -g
    prog : prog.o foo.o bar.o

will set CFLAGS to `-g` in the command script for prog, but it will also set CFLAGS to `-g` in the command scripts that create prog.o, foo.o, and bar.o, and in any command scripts which create their prerequisites.
Pattern-specific Variable Values
In addition to target-specific variable values (see section Target-specific Variable Values), GNU make supports pattern-specific variable values. In this form, a variable is defined for any target that matches the pattern specified. Variables defined in this way are searched after any target-specific variables defined explicitly for that target, and before target-specific variables defined for the parent target.
Set a pattern-specific variable value like this:
pattern … : variable-assignment
or like this:
pattern … : override variable-assignment
where pattern is a %-pattern. As with target-specific variable values, multiple pattern values create a pattern-specific variable value for each pattern individually. The variable-assignment can be any valid form of assignment. Any command-line variable setting will take precedence, unless override is specified.
For example:
    %.o : CFLAGS = -O

will assign CFLAGS the value `-O` for all targets matching the pattern %.o.
Conditional Parts of Makefiles
A conditional causes part of a makefile to be obeyed or ignored depending on the values of variables. Conditionals can compare the value of one variable to another, or the value of a variable to a constant string. Conditionals control what make actually “sees” in the makefile, so they cannot be used to control shell commands at the time of execution.
Example of a Conditional
The following example of a conditional tells make to use one set of libraries if the CC variable is `gcc`, and a different set of libraries otherwise. It works by controlling which of two command lines will be used as the command for a rule. The result is that `CC=gcc` as an argument to make changes not only which compiler is used but also which libraries are linked.

    libs_for_gcc = -lgnu
    normal_libs =

    foo: $(objects)
    ifeq ($(CC),gcc)
            $(CC) -o foo $(objects) $(libs_for_gcc)
    else
            $(CC) -o foo $(objects) $(normal_libs)
    endif
This conditional uses three directives: one ifeq, one else and one endif.
The ifeq directive begins the conditional, and specifies the condition. It contains two arguments, separated by a comma and surrounded by parentheses. Variable substitution is performed on both arguments and then they are compared. The lines of the makefile following the ifeq are obeyed if the two arguments match; otherwise they are ignored.
The else directive causes the following lines to be obeyed if the previous conditional failed. In the example above, this means that the second alternative linking command is used whenever the first alternative is not used. It is optional to have an else in a conditional.
The endif directive ends the conditional. Every conditional must end with an endif. Unconditional makefile text follows.
As this example illustrates, conditionals work at the textual level: the lines of the conditional are treated as part of the makefile, or ignored, according to the condition. This is why the larger syntactic units of the makefile, such as rules, may cross the beginning or the end of the conditional.
When the variable CC has the value `gcc`, the above example has this effect:

    foo: $(objects)
            $(CC) -o foo $(objects) $(libs_for_gcc)

When the variable CC has any other value, the effect is this:

    foo: $(objects)
            $(CC) -o foo $(objects) $(normal_libs)
Equivalent results can be obtained in another way by conditionalizing a variable assignment and then using the variable unconditionally:

    libs_for_gcc = -lgnu
    normal_libs =

    ifeq ($(CC),gcc)
      libs=$(libs_for_gcc)
    else
      libs=$(normal_libs)
    endif

    foo: $(objects)
            $(CC) -o foo $(objects) $(libs)
Syntax of Conditionals
The syntax of a simple conditional with no else is as follows:
    conditional-directive
    text-if-true
    endif
The text-if-true may be any lines of text, to be considered as part of the makefile if the condition is true. If the condition is false, no text is used instead.
The syntax of a complex conditional is as follows:
    conditional-directive
    text-if-true
    else
    text-if-false
    endif
If the condition is true, text-if-true is used; otherwise, text-if-false is used instead. The text-if-false can be any number of lines of text.
The syntax of the conditional-directive is the same whether the conditional is simple or complex. There are four different directives that test different conditions. Here is a table of them:
ifeq (arg1, arg2)
ifeq 'arg1' 'arg2'
ifeq "arg1" "arg2"
ifeq "arg1" 'arg2'
ifeq 'arg1' "arg2"
Expand all variable references in arg1 and arg2 and compare them. If they are identical, the text-if-true is effective; otherwise, the text-if-false, if any, is effective. Often you want to test if a variable has a non-empty value. When the value results from complex expansions of variables and functions, expansions you would consider empty may actually contain whitespace characters and thus are not seen as empty. However, you can use the strip function (see section Functions for String Substitution and Analysis) to avoid interpreting whitespace as a non-empty value. For example:
    ifeq ($(strip $(foo)),)
    text-if-empty
    endif
will evaluate text-if-empty even if the expansion of $(foo) contains whitespace characters.
ifneq (arg1, arg2)
ifneq 'arg1' 'arg2'
ifneq "arg1" "arg2"
ifneq "arg1" 'arg2'
ifneq 'arg1' "arg2"
Expand all variable references in arg1 and arg2 and compare them. If they are different, the text-if-true is effective; otherwise, the text-if-false, if any, is effective.
ifdef variable-name
If the variable variable-name has a non-empty value, the text-if-true is effective; otherwise, the text-if-false, if any, is effective. Variables that have never been defined have an empty value. Note that ifdef only tests whether a variable has a value. It does not expand the variable to see if that value is nonempty. Consequently, tests using ifdef return true for all definitions except those like foo =. To test for an empty value, use ifeq ($(foo),). For example,
    bar =
    foo = $(bar)
    ifdef foo
    frobozz = yes
    else
    frobozz = no
    endif

sets frobozz to `yes`, while:

    foo =
    ifdef foo
    frobozz = yes
    else
    frobozz = no
    endif

sets frobozz to `no`.
ifndef variable-name
If the variable variable-name has an empty value, the text-if-true is effective; otherwise, the text-if-false, if any, is effective.
Extra spaces are allowed and ignored at the beginning of the conditional directive line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command for a rule.) Aside from this, extra spaces or tabs may be inserted with no effect anywhere except within the directive name or within an argument. A comment starting with `#` may appear at the end of the line.

The other two directives that play a part in a conditional are else and endif. Each of these directives is written as one word, with no arguments. Extra spaces are allowed and ignored at the beginning of the line, and spaces or tabs at the end. A comment starting with `#` may appear at the end of the line.
Conditionals affect which lines of the makefile make uses. If the condition is true, make reads the lines of the text-if-true as part of the makefile; if the condition is false, make ignores those lines completely. It follows that syntactic units of the makefile, such as rules, may safely be split across the beginning or the end of the conditional.
make evaluates conditionals when it reads a makefile. Consequently, you cannot use automatic variables in the tests of conditionals because they are not defined until commands are run (see section Automatic Variables).
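For example, a test like the following cannot work, because `$@` is still empty when make reads the conditional; moving the test into the command script defers it to the shell at execution time, when `$@` has a value. (The target names are hypothetical.)

```make
# Does NOT work: make evaluates this conditional while reading the
# makefile, before $@ is set, so the comparison never succeeds.
ifeq ($@,foo)
CFLAGS += -g
endif

# Works: the test runs in the shell when the commands execute,
# after make has substituted the real target name for $@.
foo bar:
	@if [ "$@" = foo ]; then echo "building foo"; fi
```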
To prevent intolerable confusion, it is not permitted to start a conditional in one makefile and end it in another. However, you may write an include directive within a conditional, provided you do not attempt to terminate the conditional inside the included file.
Conditionals that Test Flags
You can write a conditional that tests make command flags such as `-t` by using the variable MAKEFLAGS together with the findstring function (see section Functions for String Substitution and Analysis). This is useful when touch is not enough to make a file appear up to date.

The findstring function determines whether one string appears as a substring of another. If you want to test for the `-t` flag, use `t` as the first string and the value of MAKEFLAGS as the other.

For example, here is how to arrange to use `ranlib -t` to finish marking an archive file up to date:

    archive.a: …
    ifneq (,$(findstring t,$(MAKEFLAGS)))
            +touch archive.a
            +ranlib -t archive.a
    else
            ranlib archive.a
    endif

The `+` prefix marks those command lines as “recursive” so that they will be executed despite use of the `-t` flag. See section Recursive Use of make.
Functions for Transforming Text
Functions allow you to do text processing in the makefile to compute the files to operate on or the commands to use. You use a function in a function call, where you give the name of the function and some text (the arguments) for the function to operate on. The result of the function’s processing is substituted into the makefile at the point of the call, just as a variable might be substituted.
Function Call Syntax
A function call resembles a variable reference. It looks like this:
$(function arguments)
or like this:
${function arguments}
Here function is a function name; one of a short list of names that are part of make. You can also essentially create your own functions by using the call builtin function.
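As a sketch of the call builtin, the arguments you supply are substituted for `$(1)`, `$(2)`, and so on in a variable you define yourself (the helper name here is made up):

```make
# A user-defined "function": $(1) and $(2) are replaced by the
# arguments given to call.
swap = $(2) $(1)

# result becomes `second first'
result := $(call swap,first,second)
```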
The arguments are the arguments of the function. They are separated from the function name by one or more spaces or tabs, and if there is more than one argument, then they are separated by commas. Such whitespace and commas are not part of an argument’s value. The delimiters which you use to surround the function call, whether parentheses or braces, can appear in an argument only in matching pairs; the other kind of delimiters may appear singly. If the arguments themselves contain other function calls or variable references, it is wisest to use the same kind of delimiters for all the references; write `$(subst a,b,$(x))`, not `$(subst a,b,${x})`. This is because it is clearer, and because only one type of delimiter is matched to find the end of the reference.
The text written for each argument is processed by substitution of variables and function calls to produce the argument value, which is the text on which the function acts. The substitution is done in the order in which the arguments appear.
Commas and unmatched parentheses or braces cannot appear in the text of an argument as written; leading spaces cannot appear in the text of the first argument as written. These characters can be put into the argument value by variable substitution. First define variables comma and space whose values are isolated comma and space characters, then substitute these variables where such characters are wanted, like this:
    comma:= ,
    empty:=
    space:= $(empty) $(empty)
    foo:= a b c
    bar:= $(subst $(space),$(comma),$(foo))
    # bar is now `a,b,c'.

Here the subst function replaces each space with a comma, through the value of foo, and substitutes the result.
Functions for String Substitution and Analysis
Here are some functions that operate on strings:
$(subst from,to,text)
Performs a textual replacement on the text text: each occurrence of from is replaced by to. The result is substituted for the function call. For example,
    $(subst ee,EE,feet on the street)

substitutes the string `fEEt on the strEEt`.
$(patsubst pattern,replacement,text)
Finds whitespace-separated words in text that match pattern and replaces them with replacement. Here pattern may contain a `%` which acts as a wildcard, matching any number of any characters within a word. If replacement also contains a `%`, the `%` is replaced by the text that matched the `%` in pattern.
`%` characters in patsubst function invocations can be quoted with preceding backslashes (`\`). Backslashes that would otherwise quote `%` characters can be quoted with more backslashes. Backslashes that quote `%` characters or other backslashes are removed from the pattern before it is compared to file names or has a stem substituted into it. Backslashes that are not in danger of quoting `%` characters go unmolested. For example, the pattern `the\%weird\\%pattern\\` has `the%weird\` preceding the operative `%` character, and `pattern\\` following it. The final two backslashes are left alone because they cannot affect any `%` character. Whitespace between words is folded into single space characters; leading and trailing whitespace is discarded. For example,
    $(patsubst %.c,%.o,x.c.c bar.c)

produces the value `x.c.o bar.o`. Substitution references (see section Substitution References) are a simpler way to get the effect of the patsubst function:
$(var:pattern=replacement)
is equivalent to
$(patsubst pattern,replacement,$(var))
The second shorthand simplifies one of the most common uses of patsubst: replacing the suffix at the end of file names.
$(var:suffix=replacement)
is equivalent to
$(patsubst %suffix,%replacement,$(var))
For example, you might have a list of object files:
objects = foo.o bar.o baz.o
To get the list of corresponding source files, you could simply write:
$(objects:.o=.c)
instead of using the general form:
$(patsubst %.o,%.c,$(objects))
$(strip string)
Removes leading and trailing whitespace from string and replaces each internal sequence of one or more whitespace characters with a single space. Thus, `$(strip a b c )` results in `a b c`. The function strip can be very useful when used in conjunction with conditionals. When comparing something with the empty string `''` using ifeq or ifneq, you usually want a string of just whitespace to match the empty string (see section Conditional Parts of Makefiles). Thus, the following may fail to have the desired results:

    .PHONY: all
    ifneq "$(needs_made)" ""
    all: $(needs_made)
    else
    all:;@echo 'Nothing to make!'
    endif

Replacing the variable reference `$(needs_made)` with the function call `$(strip $(needs_made))` in the ifneq directive would make it more robust.
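The more robust version of that conditional would read:

```make
# Stripping whitespace first ensures that a value consisting only
# of blanks compares equal to the empty string.
.PHONY: all
ifneq "$(strip $(needs_made))" ""
all: $(needs_made)
else
all:;@echo 'Nothing to make!'
endif
```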
$(findstring find,in)
Searches in for an occurrence of find. If it occurs, the value is find; otherwise, the value is empty. You can use this function in a conditional to test for the presence of a specific substring in a given string. Thus, the two examples,

    $(findstring a,a b c)
    $(findstring a,b c)

produce the values `a` and `''` (the empty string), respectively. See section Conditionals that Test Flags, for a practical application of findstring.
$(filter pattern…,text)
Returns all whitespace-separated words in text that do match any of the pattern words, removing any words that do not match. The patterns are written using `%`, just like the patterns used in the patsubst function above. The filter function can be used to separate out different types of strings (such as file names) in a variable. For example:

    sources := foo.c bar.c baz.s ugh.h

    foo: $(sources)
            cc $(filter %.c %.s,$(sources)) -o foo

says that foo depends on foo.c, bar.c, baz.s and ugh.h, but only foo.c, bar.c and baz.s should be specified in the command to the compiler.
$(filter-out pattern…,text)
Returns all whitespace-separated words in text that do not match any of the pattern words, removing the words that do match one or more. This is the exact opposite of the filter function. For example, given:

    objects=main1.o foo.o main2.o bar.o
    mains=main1.o main2.o

the following generates a list which contains all the object files not in mains:

    $(filter-out $(mains),$(objects))
$(sort list)
Sorts the words of list in lexical order, removing duplicate words. The output is a list of words separated by single spaces. Thus,
    $(sort foo bar lose)

returns the value `bar foo lose`.

Incidentally, since sort removes duplicate words, you can use it for this purpose even if you don’t care about the sort order.
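For example, duplicates are removed even when the input is already ordered:

```make
# sort also deduplicates: the value of words is `bar foo'.
words := $(sort foo bar foo bar)
```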
Here is a realistic example of the use of subst and patsubst. Suppose that a makefile uses the VPATH variable to specify a list of directories that make should search for prerequisite files (see section VPATH: Search Path for All Prerequisites). This example shows how to tell the C compiler to search for header files in the same list of directories.
The value of VPATH is a list of directories separated by colons, such as src:../headers
. First, the subst function is used to change the colons to spaces:
$(subst :, ,$(VPATH))
This produces src ../headers
. Then patsubst is used to turn each directory name into a -I
flag. These can be added to the value of the variable CFLAGS, which is passed automatically to the C compiler, like this:
override CFLAGS += $(patsubst %,-I%,$(subst :, ,$(VPATH)))
The effect is to append the text -Isrc -I../headers
to the previously given value of CFLAGS. The override directive is used so that the new value is assigned even if the previous value of CFLAGS was specified with a command argument (see section The override Directive).
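The effect of this pipeline can be checked with a small throwaway makefile (the file name and the show target are invented for the demo, and GNU make is assumed to be installed):

```shell
# Apply the subst/patsubst pipeline from the text to a sample VPATH,
# then echo the resulting -I flags from a trivial rule.
cat > vpath-demo.mk <<'EOF'
VPATH = src:../headers
# colons -> spaces, then each directory -> an -I flag
INCS := $(patsubst %,-I%,$(subst :, ,$(VPATH)))
show: ; @echo $(INCS)
EOF
make -s -f vpath-demo.mk show    # prints: -Isrc -I../headers
```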
Functions for File Names
Several of the built-in expansion functions relate specifically to taking apart file names or lists of file names.
Each of the following functions performs a specific transformation on a file name. The argument of the function is regarded as a series of file names, separated by whitespace. (Leading and trailing whitespace is ignored.) Each file name in the series is transformed in the same way and the results are concatenated with single spaces between them.
$(dir names…)
Extracts the directory-part of each file name in names. The directory-part of the file name is everything up through (and including) the last slash in it. If the file name contains no slash, the directory part is the string ./
. For example,
$(dir src/foo.c hacks)
produces the result src/ ./
.
$(notdir names…)
Extracts all but the directory-part of each file name in names. If the file name contains no slash, it is left unchanged. Otherwise, everything through the last slash is removed from it. A file name that ends with a slash becomes an empty string. This is unfortunate, because it means that the result does not always have the same number of whitespace-separated file names as the argument had; but we do not see any other valid alternative. For example,
$(notdir src/foo.c hacks)
produces the result foo.c hacks
.
$(suffix names…)
Extracts the suffix of each file name in names. If the file name contains a period, the suffix is everything starting with the last period. Otherwise, the suffix is the empty string. This frequently means that the result will be empty when names is not, and if names contains multiple file names, the result may contain fewer file names. For example,
$(suffix src/foo.c src-1.0/bar.c hacks)
produces the result .c .c
.
$(basename names…)
Extracts all but the suffix of each file name in names. If the file name contains a period, the basename is everything up to (and not including) the last period. Periods in the directory part are ignored. If there is no period, the basename is the entire file name. For example,
$(basename src/foo.c src-1.0/bar hacks)
produces the result src/foo src-1.0/bar hacks
.
$(addsuffix suffix,names…)
The argument names is regarded as a series of names, separated by whitespace; suffix is used as a unit. The value of suffix is appended to the end of each individual name and the resulting larger names are concatenated with single spaces between them. For example,
$(addsuffix .c,foo bar)
produces the result foo.c bar.c
.
$(addprefix prefix,names…)
The argument names is regarded as a series of names, separated by whitespace; prefix is used as a unit. The value of prefix is prepended to the front of each individual name and the resulting larger names are concatenated with single spaces between them. For example,
$(addprefix src/,foo bar)
produces the result src/foo src/bar
.
$(join list1,list2)
Concatenates the two arguments word by word: the two first words (one from each argument) concatenated form the first word of the result, the two second words form the second word of the result, and so on. So the nth word of the result comes from the nth word of each argument. If one argument has more words than the other, the extra words are copied unchanged into the result. For example, $(join a b,.c .o)
produces a.c b.o
. Whitespace between the words in the lists is not preserved; it is replaced with a single space. This function can merge the results of the dir and notdir functions, to produce the original list of files which was given to those two functions.
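This round trip can be tried with a small throwaway makefile (the file names below are invented for the demo, and GNU make is assumed to be installed): dir and notdir split a list of paths apart, and join puts it back together.

```shell
# Demonstrate that join can recombine the outputs of dir and notdir.
cat > join-demo.mk <<'EOF'
files    := src/foo.c lib/bar.c
rejoined := $(join $(dir $(files)),$(notdir $(files)))
show: ; @echo $(rejoined)
EOF
make -s -f join-demo.mk show    # prints: src/foo.c lib/bar.c
```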
$(word n,text)
Returns the nth word of text. The legitimate values of n start from 1. If n is bigger than the number of words in text, the value is empty. For example,
$(word 2, foo bar baz)
returns bar.
$(wordlist s,e,text)
Returns the list of words in text starting with word s and ending with word e (inclusive). The legitimate values of s and e start from 1. If s is bigger than the number of words in text, the value is empty. If e is bigger than the number of words in text, words up to the end of text are returned. If s is greater than e, nothing is returned. For example,
$(wordlist 2, 3, foo bar baz)
returns bar baz.
$(words text)
Returns the number of words in text. Thus, the last word of text is $(word $(words text),text).
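The last-word idiom above can be checked directly with a scratch makefile (names here are illustrative; GNU make is assumed to be available):

```shell
# $(words ...) counts the words; $(word $(words ...),...) picks the last one.
cat > words-demo.mk <<'EOF'
list := alpha beta gamma
last := $(word $(words $(list)),$(list))
show: ; @echo $(words $(list)) $(last)
EOF
make -s -f words-demo.mk show    # prints: 3 gamma
```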
$(firstword names…)
The argument names is regarded as a series of names, separated by whitespace. The value is the first name in the series. The rest of the names are ignored. For example,
$(firstword foo bar)
produces the result foo. Although $(firstword text) is the same as $(word 1,text), the firstword function is retained for its simplicity.
$(wildcard pattern)
The argument pattern is a file name pattern, typically containing wildcard characters (as in shell file name patterns). The result of wildcard is a space-separated list of the names of existing files that match the pattern. See section Using Wildcard Characters in File Names.
The foreach Function
The foreach function is very different from other functions. It causes one piece of text to be used repeatedly, each time with a different substitution performed on it. It resembles the for command in the shell sh and the foreach command in the C-shell csh.
The syntax of the foreach function is:
$(foreach var,list,text)
The first two arguments, var and list, are expanded before anything else is done; note that the last argument, text, is not expanded at the same time. Then for each word of the expanded value of list, the variable named by the expanded value of var is set to that word, and text is expanded. Presumably text contains references to that variable, so its expansion will be different each time.
The result is that text is expanded as many times as there are whitespace-separated words in list. The multiple expansions of text are concatenated, with spaces between them, to make the result of foreach.
This simple example sets the variable files
to the list of all files in the directories in the list dirs
:
dirs := a b c d
files := $(foreach dir,$(dirs),$(wildcard $(dir)/*))
Here text is $(wildcard $(dir)/*). The first repetition finds the value a for dir, so it produces the same result as $(wildcard a/*); the second repetition produces the result of $(wildcard b/*); and the third, that of $(wildcard c/*).
This example has the same result (except for setting dirs
) as the following example:
files := $(wildcard a/* b/* c/* d/*)
When text is complicated, you can improve readability by giving it a name, with an additional variable:
find_files = $(wildcard $(dir)/*)
dirs := a b c d
files := $(foreach dir,$(dirs),$(find_files))
Here we use the variable find_files this way. We use plain =
to define a recursively-expanding variable, so that its value contains an actual function call to be reexpanded under the control of foreach; a simply-expanded variable would not do, since wildcard would be called only once at the time of defining find_files.
The foreach function has no permanent effect on the variable var; its value and flavor after the foreach function call are the same as they were beforehand. The other values which are taken from list are in effect only temporarily, during the execution of foreach. The variable var is a simply-expanded variable during the execution of foreach. If var was undefined before the foreach function call, it is undefined after the call. See section The Two Flavors of Variables.
You must take care when using complex variable expressions that result in variable names because many strange things are valid variable names, but are probably not what you intended. For example,
files := $(foreach Esta escrito en espanol!,b c ch,$(find_files))
might be useful if the value of find_files references the variable whose name is Esta escrito en espanol!
(a rather long name, isn't it?), but it is more likely to be a mistake.
The if Function
The if function provides support for conditional expansion in a functional context (as opposed to the GNU make makefile conditionals such as ifeq; see section Syntax of Conditionals).
An if function call can contain either two or three arguments:
$(if condition,then-part[,else-part])
The first argument, condition, first has all preceding and trailing whitespace stripped, then is expanded. If it expands to any non-empty string, then the condition is considered to be true. If it expands to an empty string, the condition is considered to be false.
If the condition is true then the second argument, then-part, is evaluated and this is used as the result of the evaluation of the entire if function.
If the condition is false then the third argument, else-part, is evaluated and this is the result of the if function. If there is no third argument, the if function evaluates to nothing (the empty string).
Note that only one of the then-part or the else-part will be evaluated, never both. Thus, either can contain side-effects (such as shell function calls, etc.)
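A short sketch of conditional expansion (the variable names, file name, and show target are invented for this demo; GNU make is assumed to be installed):

```shell
# $(if ...) expands to the then-part when the condition is non-empty,
# and to the else-part otherwise.
cat > if-demo.mk <<'EOF'
mode := $(if $(DEBUG),-g -O0,-O2)
show: ; @echo $(mode)
EOF
make -s -f if-demo.mk show DEBUG=       # DEBUG empty  -> prints: -O2
make -s -f if-demo.mk show DEBUG=yes    # DEBUG set    -> prints: -g -O0
```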
The call Function
The call function is unique in that it can be used to create new parameterized functions. You can write a complex expression as the value of a variable, then use call to expand it with different values.
The syntax of the call function is:
$(call variable,param,param,…)
When make expands this function, it assigns each param to temporary variables $(1), $(2), etc. The variable $(0) will contain variable. There is no maximum number of parameter arguments. There is no minimum, either, but it doesn’t make sense to use call with no parameters.
Then variable is expanded as a make variable in the context of these temporary assignments. Thus, any reference to $(1) in the value of variable will resolve to the first param in the invocation of call.
Note that variable is the name of a variable, not a reference to that variable. Therefore you would not normally use a $
or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.)
If variable is the name of a builtin function, the builtin function is always invoked (even if a make variable by that name also exists).
The call function expands the param arguments before assigning them to temporary variables. This means that variable values containing references to builtin functions that have special expansion rules, like foreach or if, may not work as you expect.
Some examples may make this clearer.
This macro simply reverses its arguments:
reverse = $(2) $(1)
foo = $(call reverse,a,b)
Here foo will contain b a.
This one is slightly more interesting: it defines a macro to search for the first instance of a program in PATH:
pathsearch = $(firstword $(wildcard $(addsuffix /$(1),$(subst :, ,$(PATH)))))
LS := $(call pathsearch,ls)
Now the variable LS contains /bin/ls or similar.
The call function can be nested. Each recursive invocation gets its own local values for $(1), etc. that mask the values of higher-level call. For example, here is an implementation of a map function:
map = $(foreach a,$(2),$(call $(1),$(a)))
Now you can map a function that normally takes only one argument, such as origin, to multiple values in one step:
o = $(call map,origin,o map MAKE)
and end up with o containing something like file file default.
A final caution: be careful when adding whitespace to the arguments to call. As with other functions, any whitespace contained in the second and subsequent arguments is kept; this can cause strange effects. It’s generally safest to remove all extraneous whitespace when providing parameters to call.
The origin Function
The origin function is unlike most other functions in that it does not operate on the values of variables; it tells you something about a variable. Specifically, it tells you where it came from.
The syntax of the origin function is:
$(origin variable)
Note that variable is the name of a variable to inquire about; not a reference to that variable. Therefore you would not normally use a $
or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.)
The result of this function is a string telling you how the variable variable was defined:
undefined
if variable was never defined.
default
if variable has a default definition, as is usual with CC and so on. See section Variables Used by Implicit Rules. Note that if you have redefined a default variable, the origin function will return the origin of the later definition.
environment
if variable was defined as an environment variable and the -e
option is not turned on (see section Summary of Options).
environment override
if variable was defined as an environment variable and the -e
option is turned on (see section Summary of Options).
file
if variable was defined in a makefile.
command line
if variable was defined on the command line.
override
if variable was defined with an override directive in a makefile (see section The override Directive).
automatic
if variable is an automatic variable defined for the execution of the commands for each rule (see section Automatic Variables).
This information is primarily useful (other than for your curiosity) to determine if you want to believe the value of a variable. For example, suppose you have a makefile foo
that includes another makefile bar
. You want a variable bletch to be defined in bar
if you run the command make -f bar
, even if the environment contains a definition of bletch. However, if foo
defined bletch before including bar
, you do not want to override that definition. This could be done by using an override directive in foo
, giving that definition precedence over the later definition in bar
; unfortunately, the override directive would also override any command line definitions. So, bar
could include:
ifdef bletch
ifeq "$(origin bletch)" "environment"
bletch = barf, gag, etc.
endif
endif
If bletch has been defined from the environment, this will redefine it.
If you want to override a previous definition of bletch if it came from the environment, even under -e
, you could instead write:
ifneq "$(findstring environment,$(origin bletch))" ""
bletch = barf, gag, etc.
endif
Here the redefinition takes place if $(origin bletch)
returns either environment
or environment override
. See section Functions for String Substitution and Analysis.
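The possible answers are easy to observe with a scratch makefile (file and variable names below are made up for the demo; GNU make is assumed to be available). A variable assigned in the makefile reports file, and a name that was never set reports undefined:

```shell
# Query the origin of a makefile-defined variable and an undefined one.
cat > origin-demo.mk <<'EOF'
FOO = bar
show: ; @echo $(origin FOO) $(origin SOME_UNSET_NAME)
EOF
make -s -f origin-demo.mk show    # prints: file undefined
```

On most systems $(origin CC) would similarly report default, unless CC has been set in the environment or on the command line.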
The shell Function
The shell function is unlike any other function except the wildcard function (see section The Function wildcard) in that it communicates with the world outside of make.
The shell function performs the same function that backquotes (`
) perform in most shells: it does command expansion. This means that it takes an argument that is a shell command and returns the output of the command. The only processing make does on the result, before substituting it into the surrounding text, is to convert each newline or carriage-return / newline pair to a single space. It also removes the trailing (carriage-return and) newline, if it’s the last thing in the result.
The commands run by calls to the shell function are run when the function calls are expanded. In most cases, this is when the makefile is read in. The exception is that function calls in the commands of the rules are expanded when the commands are run, and this applies to shell function calls like all others.
Here are some examples of the use of the shell function:
contents := $(shell cat foo)
sets contents to the contents of the file foo
, with a space (rather than a newline) separating each line.
files := $(shell echo *.c)
sets files to the expansion of *.c. Unless make is using a very strange shell, this has the same result as $(wildcard *.c).
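The newline-to-space conversion described above can be seen directly (the file names here are throwaways for the demo; GNU make is assumed to be installed):

```shell
# A three-line file becomes a single space-separated string in make.
printf 'one\ntwo\nthree\n' > shell-demo.txt
cat > shell-demo.mk <<'EOF'
contents := $(shell cat shell-demo.txt)
show: ; @echo "$(contents)"
EOF
make -s -f shell-demo.mk show    # prints: one two three
```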
Functions That Control Make
These functions control the way make runs. Generally, they are used to provide information to the user of the makefile or to cause make to stop if some sort of environmental error is detected.
$(error text…)
Generates a fatal error where the message is text. Note that the error is generated whenever this function is evaluated. So, if you put it inside a command script or on the right side of a recursive variable assignment, it won’t be evaluated until later. The text will be expanded before the error is generated. For example,
ifdef ERROR1
$(error error is $(ERROR1))
endif
will generate a fatal error during the read of the makefile if the make variable ERROR1 is defined. Or,
ERR = $(error found an error!)
.PHONY: err
err: ; $(ERR)
will generate a fatal error while make is running, if the err target is invoked.
$(warning text…)
This function works similarly to the error function, above, except that make doesn’t exit. Instead, text is expanded and the resulting message is displayed, but processing of the makefile continues. The result of the expansion of this function is the empty string.
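The difference is easy to observe with a scratch makefile (the file name and message below are invented for the demo; GNU make is assumed to be installed): a top-level $(warning ...) prints its message on standard error while the makefile is read, and the build then proceeds normally.

```shell
# A warning is printed during makefile parsing, but make keeps going.
cat > warn-demo.mk <<'EOF'
$(warning this is only a warning)
all: ; @echo still running
EOF
make -s -f warn-demo.mk    # warns on stderr, then prints: still running
```

Replacing $(warning ...) with $(error ...) in the same spot would instead stop make before any rule runs.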
How to Run make
A makefile that says how to recompile a program can be used in more than one way. The simplest use is to recompile every file that is out of date. Usually, makefiles are written so that if you run make with no arguments, it does just that.
But you might want to update only some of the files; you might want to use a different compiler or different compiler options; you might want just to find out which files are out of date without changing them.
By giving arguments when you run make, you can do any of these things and many others.
The exit status of make is always one of three values:
0
The exit status is zero if make is successful.
2
The exit status is two if make encounters any errors. It will print messages describing the particular errors.
1
The exit status is one if you use the -q
flag and make determines that some target is not already up to date. See section Instead of Executing the Commands.
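The status codes can be checked from the shell with a scratch makefile (names invented for the demo; GNU make assumed available). A file that already exists and has no work pending gives status 0 under -q; a phony target with a recipe is never up to date, so -q gives status 1:

```shell
cat > status-demo.mk <<'EOF'
.PHONY: rebuild
rebuild: ; @echo would rebuild
EOF
touch ready.txt
# Exit 0: ready.txt exists and needs no rule.
make -q -f status-demo.mk ready.txt && echo "ready.txt is up to date"
# Exit 1: a phony target always needs updating.
make -q -f status-demo.mk rebuild || echo "rebuild needs updating (status $?)"
```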
Arguments to Specify the Makefile
The way to specify the name of the makefile is with the -f
or --file
option (--makefile
also works). For example, -f altmake
says to use the file altmake
as the makefile.
If you use the -f
flag several times and follow each -f
with an argument, all the specified files are used jointly as makefiles.
If you do not use the -f
or --file
flag, the default is to try GNUmakefile
, makefile
, and Makefile
, in that order, and use the first of these three which exists or can be made (see section Writing Makefiles).
Arguments to Specify the Goals
The goals are the targets that make should strive ultimately to update. Other targets are updated as well if they appear as prerequisites of goals, or prerequisites of prerequisites of goals, etc.
By default, the goal is the first target in the makefile (not counting targets that start with a period). Therefore, makefiles are usually written so that the first target is for compiling the entire program or programs they describe. If the first rule in the makefile has several targets, only the first target in the rule becomes the default goal, not the whole list.
You can specify a different goal or goals with arguments to make. Use the name of the goal as an argument. If you specify several goals, make processes each of them in turn, in the order you name them.
Any target in the makefile may be specified as a goal (unless it starts with -
or contains an =
, in which case it will be parsed as a switch or variable definition, respectively). Even targets not in the makefile may be specified, if make can find implicit rules that say how to make them.
Make will set the special variable MAKECMDGOALS to the list of goals you specified on the command line. If no goals were given on the command line, this variable is empty. Note that this variable should be used only in special circumstances.
An example of appropriate use is to avoid including .d
files during clean rules (see section Generating Prerequisites Automatically), so make won’t create them only to immediately remove them again:
sources = foo.c bar.c
ifneq ($(MAKECMDGOALS),clean)
include $(sources:.c=.d)
endif
One use of specifying a goal is if you want to compile only a part of the program, or only one of several programs. Specify as a goal each file that you wish to remake. For example, consider a directory containing several programs, with a makefile that starts like this:
.PHONY: all
all: size nm ld ar as
If you are working on the program size, you might want to say make size
so that only the files of that program are recompiled.
Another use of specifying a goal is to make files that are not normally made. For example, there may be a file of debugging output, or a version of the program that is compiled specially for testing, which has a rule in the makefile but is not a prerequisite of the default goal.
Another use of specifying a goal is to run the commands associated with a phony target (see section Phony Targets) or empty target (see section Empty Target Files to Record Events). Many makefiles contain a phony target named clean
which deletes everything except source files. Naturally, this is done only if you request it explicitly with make clean
. Following is a list of typical phony and empty target names. See section Standard Targets for Users, for a detailed list of all the standard target names which GNU software packages use.
all
Make all the top-level targets the makefile knows about.
clean
Delete all files that are normally created by running make.
mostlyclean
Like clean
, but may refrain from deleting a few files that people normally don’t want to recompile. For example, the mostlyclean
target for GCC does not delete libgcc.a
, because recompiling it is rarely necessary and takes a lot of time.
distclean
realclean
clobber
Any of these targets might be defined to delete more files than clean
does. For example, this would delete configuration files or links that you would normally create as preparation for compilation, even if the makefile itself cannot create these files.
install
Copy the executable file into a directory that users typically search for commands; copy any auxiliary files that the executable uses into the directories where it will look for them.
print
Print listings of the source files that have changed.
tar
Create a tar file of the source files.
shar
Create a shell archive (shar file) of the source files.
dist
Create a distribution file of the source files. This might be a tar file, or a shar file, or a compressed version of one of the above, or even more than one of the above.
TAGS
Update a tags table for this program.
check
test
Perform self tests on the program this makefile builds.
Instead of Executing the Commands
The makefile tells make how to tell whether a target is up to date, and how to update each target. But updating the targets is not always what you want. Certain options specify other activities for make.
-n
--just-print
--dry-run
--recon
"No-op”. The activity is to print what commands would be used to make the targets up to date, but not actually execute them.
-t
--touch
"Touch”. The activity is to mark the targets as up to date without actually changing them. In other words, make pretends to compile the targets but does not really change their contents.
-q
--question
"Question”. The activity is to find out silently whether the targets are up to date already; but execute no commands in either case. In other words, neither compilation nor output will occur.
-W file
--what-if=file
--assume-new=file
--new-file=file
"What if”. Each -W
flag is followed by a file name. The given files’ modification times are recorded by make as being the present time, although the actual modification times remain the same. You can use the -W
flag in conjunction with the -n
flag to see what would happen if you were to modify specific files.
With the -n
flag, make prints the commands that it would normally execute but does not execute them.
With the -t
flag, make ignores the commands in the rules and uses (in effect) the command touch for each target that needs to be remade. The touch command is also printed, unless -s
or .SILENT is used. For speed, make does not actually invoke the program touch. It does the work directly.
With the -q
flag, make prints nothing and executes no commands, but the exit status code it returns is zero if and only if the targets to be considered are already up to date. If the exit status is one, then some updating needs to be done. If make encounters an error, the exit status is two, so you can distinguish an error from a target that is not up to date.
It is an error to use more than one of these three flags in the same invocation of make.
The -n
, -t
, and -q
options do not affect command lines that begin with +
characters or contain the strings $(MAKE)
or ${MAKE}
. Note that only the line containing the +
character or the strings $(MAKE)
or ${MAKE}
is run regardless of these options. Other lines in the same rule are not run unless they too begin with +
or contain $(MAKE)
or ${MAKE}
(See section How the MAKE Variable Works.)
The -W
flag provides two features:
- If you also use the -n or -q flag, you can see what make would do if you were to modify some files.
- Without the -n or -q flag, when make is actually executing commands, the -W flag can direct make to act as if some files had been modified, without actually modifying the files.
Note that the options -p
and -v
allow you to obtain other information about make or about the makefiles in use (see section Summary of Options).
Avoiding Recompilation of Some Files
Sometimes you may have changed a source file but you do not want to recompile all the files that depend on it. For example, suppose you add a macro or a declaration to a header file that many other files depend on. Being conservative, make assumes that any change in the header file requires recompilation of all dependent files, but you know that they do not need to be recompiled and you would rather not waste the time waiting for them to compile.
If you anticipate the problem before changing the header file, you can use the -t
flag. This flag tells make not to run the commands in the rules, but rather to mark the target up to date by changing its last-modification date. You would follow this procedure:
- Use the command make to recompile the source files that really need recompilation.
- Make the changes in the header files.
- Use the command make -t to mark all the object files as up to date. The next time you run make, the changes in the header files will not cause any recompilation.
If you have already changed the header file at a time when some files do need recompilation, it is too late to do this. Instead, you can use the -o file
flag, which marks a specified file as “old” (see section Summary of Options). This means that the file itself will not be remade, and nothing else will be remade on its account. Follow this procedure:
- Recompile the source files that need compilation for reasons independent of the particular header file, with make -o headerfile. If several header files are involved, use a separate -o option for each header file.
- Touch all the object files with make -t.
Overriding Variables
An argument that contains =
specifies the value of a variable: v=x
sets the value of the variable v to x. If you specify a value in this way, all ordinary assignments of the same variable in the makefile are ignored; we say they have been overridden by the command line argument.
The most common way to use this facility is to pass extra flags to compilers. For example, in a properly written makefile, the variable CFLAGS is included in each command that runs the C compiler, so a file foo.c
would be compiled something like this:
cc -c $(CFLAGS) foo.c
Thus, whatever value you set for CFLAGS affects each compilation that occurs. The makefile probably specifies the usual value for CFLAGS, like this:
CFLAGS=-g
Each time you run make, you can override this value if you wish. For example, if you say make CFLAGS='-g -O'
, each C compilation will be done with cc -c -g -O
. (This illustrates how you can use quoting in the shell to enclose spaces and other special characters in the value of a variable when you override it.)
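The overriding behavior can be tried at a shell prompt with a scratch makefile (the file name and show target are invented for the demo; GNU make is assumed to be installed):

```shell
# A command-line assignment overrides the makefile's own assignment.
cat > override-demo.mk <<'EOF'
CFLAGS = -g
show: ; @echo $(CFLAGS)
EOF
make -s -f override-demo.mk show                  # prints: -g
make -s -f override-demo.mk show CFLAGS='-g -O'   # prints: -g -O
```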
The variable CFLAGS is only one of many standard variables that exist just so that you can change them this way. See section Variables Used by Implicit Rules, for a complete list.
You can also program the makefile to look at additional variables of your own, giving the user the ability to control other aspects of how the makefile works by changing the variables.
When you override a variable with a command argument, you can define either a recursively-expanded variable or a simply-expanded variable. The examples shown above make a recursively-expanded variable; to make a simply-expanded variable, write :=
instead of =
. But, unless you want to include a variable reference or function call in the value that you specify, it makes no difference which kind of variable you create.
There is one way that the makefile can change a variable that you have overridden. This is to use the override directive, which is a line that looks like this: override variable = value
(see section The override Directive).
Testing the Compilation of a Program
Normally, when an error happens in executing a shell command, make gives up immediately, returning a nonzero status. No further commands are executed for any target. The error implies that the goal cannot be correctly remade, and make reports this as soon as it knows.
When you are compiling a program that you have just changed, this is not what you want. Instead, you would rather that make try compiling every file that can be tried, to show you as many compilation errors as possible.
On these occasions, you should use the -k
or --keep-going
flag. This tells make to continue to consider the other prerequisites of the pending targets, remaking them if necessary, before it gives up and returns nonzero status. For example, after an error in compiling one object file, make -k
will continue compiling other object files even though it already knows that linking them will be impossible. In addition to continuing after failed shell commands, make -k
will continue as much as possible after discovering that it does not know how to make a target or prerequisite file. This will always cause an error message, but without -k
, it is a fatal error (see section Summary of Options).
The usual behavior of make assumes that your purpose is to get the goals up to date; once make learns that this is impossible, it might as well report the failure immediately. The -k
flag says that the real purpose is to test as much as possible of the changes made in the program, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs’ M-x compile
command passes the -k
flag by default.
Summary of Options
Here is a table of all the options make understands:
-b
-m
These options are ignored for compatibility with other versions of make.
-C dir
--directory=dir
Change to directory dir before reading the makefiles. If multiple -C
options are specified, each is interpreted relative to the previous one: -C / -C etc
is equivalent to -C /etc
. This is typically used with recursive invocations of make (see section Recursive Use of make).
-d
Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied: everything interesting about how make decides what to do. The -d option is equivalent to --debug=a
(see below).
--debug[=options]
Print debugging information in addition to normal processing. Various levels and types of output can be chosen. With no arguments, print the “basic” level of debugging. Possible arguments are below; only the first character is considered, and values must be comma- or space-separated.
a (all)
All types of debugging output are enabled. This is equivalent to using -d
.
b (basic)
Basic debugging prints each target that was found to be out-of-date, and whether the build was successful or not.
v (verbose)
A level above basic
; includes messages about which makefiles were parsed, prerequisites that did not need to be rebuilt, etc. This option also enables basic
messages.
i (implicit)
Prints messages describing the implicit rule searches for each target. This option also enables basic
messages.
j (jobs)
Prints messages giving details on the invocation of specific subcommands.
m (makefile)
By default, the above messages are not enabled while trying to remake the makefiles. This option enables messages while rebuilding makefiles, too. Note that the all
option does enable this option. This option also enables basic
messages.
-e
--environment-overrides
Give variables taken from the environment precedence over variables from makefiles. See section Variables from the Environment.
-f file
--file=file
--makefile=file
Read the file named file as a makefile. See section Writing Makefiles.
-h
--help
Remind you of the options that make understands and then exit.
-i
--ignore-errors
Ignore all errors in commands executed to remake files. See section Errors in Commands.
-I dir
--include-dir=dir
Specifies a directory dir to search for included makefiles. See section Including Other Makefiles. If several -I
options are used to specify several directories, the directories are searched in the order specified.
-j [jobs]
--jobs[=jobs]
Specifies the number of jobs (commands) to run simultaneously. With no argument, make runs as many jobs simultaneously as possible. If there is more than one -j
option, the last one is effective. See section Parallel Execution, for more information on how commands are run. Note that this option is ignored on MS-DOS.
-k
--keep-going
Continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other prerequisites of these targets can be processed all the same. See section Testing the Compilation of a Program.
-l [load]
--load-average[=load]
--max-load[=load]
Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number). With no argument, removes a previous load limit. See section Parallel Execution.
-n
--just-print
--dry-run
--recon
Print the commands that would be executed, but do not execute them. See section Instead of Executing the Commands.
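As a sketch (file names are illustrative), -n prints the recipe without running it, so the target file is never created:

```shell
mkdir -p demo_n
printf 'out.txt:\n\techo built > out.txt\n' > demo_n/Makefile
# Prints the echo command that *would* run, but does not execute it.
make -C demo_n -n
# out.txt was not created by the dry run.
test ! -f demo_n/out.txt && echo "nothing was built"
```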
-o file
--old-file=file
--assume-old=file
Do not remake the file file even if it is older than its prerequisites, and do not remake anything on account of changes in file. Essentially the file is treated as very old and its rules are ignored. See section Avoiding Recompilation of Some Files.
-p
--print-data-base
Print the data base (rules and variable values) that results from reading the makefiles; then execute as usual or as otherwise specified. This also prints the version information given by the -v
switch (see below). To print the data base without trying to remake any files, use make -qp
. To print the data base of predefined rules and variables, use make -p -f /dev/null
. The data base output contains filename and linenumber information for command and variable definitions, so it can be a useful debugging tool in complex environments.
-q
--question
“Question mode”. Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, one if any remaking is required, or two if an error is encountered. See section Instead of Executing the Commands.
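The exit-status behavior of -q can be sketched with a trivial makefile (names are made up):

```shell
mkdir -p demo_q
printf 'up.txt:\n\ttouch up.txt\n' > demo_q/Makefile
# Before building: up.txt needs remaking, so -q exits with status 1.
make -C demo_q -q || echo "not up to date yet"
# Build it for real.
make -C demo_q
# Now -q exits with status 0: nothing to do.
make -C demo_q -q && echo "now up to date"
```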
-r
--no-builtin-rules
Eliminate use of the built-in implicit rules (see section Using Implicit Rules). You can still define your own by writing pattern rules (see section Defining and Redefining Pattern Rules). The -r
option also clears out the default list of suffixes for suffix rules (see section Old-Fashioned Suffix Rules). But you can still define your own suffixes with a rule for .SUFFIXES, and then define your own suffix rules. Note that only rules are affected by the -r option; default variables remain in effect (see section Variables Used by Implicit Rules); see the -R
option below.
-R
--no-builtin-variables
Eliminate use of the built-in rule-specific variables (see section Variables Used by Implicit Rules). You can still define your own, of course. The -R
option also automatically enables the -r
option (see above), since it doesn’t make sense to have implicit rules without any definitions for the variables that they use.
-s
--silent
--quiet
Silent operation; do not print the commands as they are executed. See section Command Echoing.
-S
--no-keep-going
--stop
Cancel the effect of the -k
option. This is never necessary except in a recursive make where -k
might be inherited from the top-level make via MAKEFLAGS (see section Recursive Use of make) or if you set -k
in MAKEFLAGS in your environment.
-t
--touch
Touch files (mark them up to date without really changing them) instead of running their commands. This is used to pretend that the commands were done, in order to fool future invocations of make. See section Instead of Executing the Commands.
-v
--version
Print the version of the make program plus a copyright, a list of authors, and a notice that there is no warranty; then exit.
-w
--print-directory
Print a message containing the working directory both before and after executing the makefile. This may be useful for tracking down errors from complicated nests of recursive make commands. See section Recursive Use of make. (In practice, you rarely need to specify this option since make
does it for you; see section The --print-directory
Option.)
--no-print-directory
Disable printing of the working directory under -w. This option is useful when -w is turned on automatically, but you do not want to see the extra messages. See section The --print-directory
Option.
-W file
--what-if=file
--new-file=file
--assume-new=file
Pretend that the target file has just been modified. When used with the -n
flag, this shows you what would happen if you were to modify that file. Without -n
, it is almost the same as running a touch command on the given file before running make, except that the modification time is changed only in the imagination of make. See section Instead of Executing the Commands.
--warn-undefined-variables
Issue a warning message whenever make sees a reference to an undefined variable. This can be helpful when you are trying to debug makefiles which use variables in complex ways.
Using Implicit Rules
Certain standard ways of remaking target files are used very often. For example, one customary way to make an object file is from a C source file using the C compiler, cc.
Implicit rules tell make how to use customary techniques so that you do not have to specify them in detail when you want to use them. For example, there is an implicit rule for C compilation. File names determine which implicit rules are run. For example, C compilation typically takes a .c
file and makes a .o
file. So make applies the implicit rule for C compilation when it sees this combination of file name endings.
A chain of implicit rules can apply in sequence; for example, make will remake a .o
file from a .y
file by way of a .c
file. See section Chains of Implicit Rules.
The built-in implicit rules use several variables in their commands so that, by changing the values of the variables, you can change the way the implicit rule works. For example, the variable CFLAGS controls the flags given to the C compiler by the implicit rule for C compilation. See section Variables Used by Implicit Rules.
You can define your own implicit rules by writing pattern rules. See section Defining and Redefining Pattern Rules.
Suffix rules are a more limited way to define implicit rules. Pattern rules are more general and clearer, but suffix rules are retained for compatibility. See section Old-Fashioned Suffix Rules.
Using Implicit Rules
To allow make to find a customary method for updating a target file, all you have to do is refrain from specifying commands yourself. Either write a rule with no command lines, or don’t write a rule at all. Then make will figure out which implicit rule to use based on which kind of source file exists or can be made.
For example, suppose the makefile looks like this:
foo : foo.o bar.o
        cc -o foo foo.o bar.o $(CFLAGS) $(LDFLAGS)
Because you mention foo.o
but do not give a rule for it, make will automatically look for an implicit rule that tells how to update it. This happens whether or not the file foo.o
currently exists.
If an implicit rule is found, it can supply both commands and one or more prerequisites (the source files). You would want to write a rule for foo.o
with no command lines if you need to specify additional prerequisites, such as header files, that the implicit rule cannot supply.
Each implicit rule has a target pattern and prerequisite patterns. There may be many implicit rules with the same target pattern. For example, numerous rules make .o
files: one, from a .c
file with the C compiler; another, from a .p
file with the Pascal compiler; and so on. The rule that actually applies is the one whose prerequisites exist or can be made. So, if you have a file foo.c
, make will run the C compiler; otherwise, if you have a file foo.p
, make will run the Pascal compiler; and so on.
Of course, when you write the makefile, you know which implicit rule you want make to use, and you know it will choose that one because you know which possible prerequisite files are supposed to exist. See section Catalogue of Implicit Rules, for a catalogue of all the predefined implicit rules.
Above, we said an implicit rule applies if the required prerequisites “exist or can be made”. A file “can be made” if it is mentioned explicitly in the makefile as a target or a prerequisite, or if an implicit rule can be recursively found for how to make it. When an implicit prerequisite is the result of another implicit rule, we say that chaining is occurring. See section Chains of Implicit Rules.
In general, make searches for an implicit rule for each target, and for each double-colon rule, that has no commands. A file that is mentioned only as a prerequisite is considered a target whose rule specifies nothing, so implicit rule search happens for it. See section Implicit Rule Search Algorithm, for the details of how the search is done.
Note that explicit prerequisites do not influence implicit rule search. For example, consider this explicit rule:
foo.o: foo.p
The prerequisite on foo.p
does not necessarily mean that make will remake foo.o
according to the implicit rule to make an object file, a .o
file, from a Pascal source file, a .p
file. For example, if foo.c
also exists, the implicit rule to make an object file from a C source file is used instead, because it appears before the Pascal rule in the list of predefined implicit rules (see section Catalogue of Implicit Rules).
If you do not want an implicit rule to be used for a target that has no commands, you can give that target empty commands by writing a semicolon (see section Using Empty Commands).
Catalogue of Implicit Rules
Here is a catalogue of predefined implicit rules which are always available unless the makefile explicitly overrides or cancels them. See section Canceling Implicit Rules, for information on canceling or overriding an implicit rule. The -r
or --no-builtin-rules
option cancels all predefined rules.
Not all of these rules will always be defined, even when the -r
option is not given. Many of the predefined implicit rules are implemented in make as suffix rules, so which ones will be defined depends on the suffix list (the list of prerequisites of the special target .SUFFIXES). The default suffix list is: .out, .a, .ln, .o, .c, .cc, .C, .p, .f, .F, .r, .y, .l, .s, .S, .mod, .sym, .def, .h, .info, .dvi, .tex, .texinfo, .texi, .txinfo, .w, .ch .web, .sh, .elc, .el. All of the implicit rules described below whose prerequisites have one of these suffixes are actually suffix rules. If you modify the suffix list, the only predefined suffix rules in effect will be those named by one or two of the suffixes that are on the list you specify; rules whose suffixes fail to be on the list are disabled. See section Old-Fashioned Suffix Rules, for full details on suffix rules.
Compiling C programs
n.o
is made automatically from n.c
with a command of the form $(CC) -c $(CPPFLAGS) $(CFLAGS)
.
Compiling C++ programs
n.o
is made automatically from n.cc
or n.C
with a command of the form $(CXX) -c $(CPPFLAGS) $(CXXFLAGS)
. We encourage you to use the suffix .cc
for C++ source files instead of .C
.
Compiling Pascal programs
n.o
is made automatically from n.p
with the command $(PC) -c $(PFLAGS)
.
Compiling Fortran and Ratfor programs
n.o
is made automatically from n.r
, n.F
or n.f
by running the Fortran compiler. The precise command used is as follows:
.f
$(FC) -c $(FFLAGS)
.
.F
$(FC) -c $(FFLAGS) $(CPPFLAGS)
.
.r
$(FC) -c $(FFLAGS) $(RFLAGS)
.
Preprocessing Fortran and Ratfor programs
n.f
is made automatically from n.r
or n.F
. This rule runs just the preprocessor to convert a Ratfor or preprocessable Fortran program into a strict Fortran program. The precise command used is as follows:
.F
$(FC) -F $(CPPFLAGS) $(FFLAGS)
.
.r
$(FC) -F $(FFLAGS) $(RFLAGS)
.
Compiling Modula-2 programs
n.sym
is made from n.def
with a command of the form $(M2C) $(M2FLAGS) $(DEFFLAGS)
. n.o
is made from n.mod
; the form is: $(M2C) $(M2FLAGS) $(MODFLAGS)
.
Assembling and preprocessing assembler programs
n.o
is made automatically from n.s
by running the assembler, as. The precise command is $(AS) $(ASFLAGS)
.
n.s
is made automatically from n.S
by running the C preprocessor, cpp. The precise command is $(CPP) $(CPPFLAGS)
.
Linking a single object file
n
is made automatically from n.o
by running the linker (usually called ld) via the C compiler. The precise command used is $(CC) $(LDFLAGS) n.o $(LOADLIBES) $(LDLIBS)
. This rule does the right thing for a simple program with only one source file. It will also do the right thing if there are multiple object files (presumably coming from various other source files), one of which has a name matching that of the executable file. Thus,
x: y.o z.o
when x.c
, y.c
and z.c
all exist will execute:
cc -c x.c -o x.o
cc -c y.c -o y.o
cc -c z.c -o z.o
cc x.o y.o z.o -o x
rm -f x.o
rm -f y.o
rm -f z.o
In more complicated cases, such as when there is no object file whose name derives from the executable file name, you must write an explicit command for linking. Each kind of file automatically made into .o
object files will be automatically linked by using the compiler ($(CC)
, $(FC)
or $(PC)
; the C compiler $(CC)
is used to assemble .s
files) without the -c
option. This could be done by using the .o
object files as intermediates, but it is faster to do the compiling and linking in one step, so that’s how it’s done.
Yacc for C programs
n.c
is made automatically from n.y
by running Yacc with the command $(YACC) $(YFLAGS)
.
Lex for C programs
n.c
is made automatically from n.l
by running Lex. The actual command is $(LEX) $(LFLAGS)
.
Lex for Ratfor programs
n.r
is made automatically from n.l
by running Lex. The actual command is $(LEX) $(LFLAGS)
. The convention of using the same suffix .l
for all Lex files regardless of whether they produce C code or Ratfor code makes it impossible for make to determine automatically which of the two languages you are using in any particular case. If make is called upon to remake an object file from a .l
file, it must guess which compiler to use. It will guess the C compiler, because that is more common. If you are using Ratfor, make sure make knows this by mentioning n.r
in the makefile. Or, if you are using Ratfor exclusively, with no C files, remove .c
from the list of implicit rule suffixes with:
.SUFFIXES:
.SUFFIXES: .o .r .f .l …
Making Lint Libraries from C, Yacc, or Lex programs
n.ln
is made from n.c
by running lint. The precise command is $(LINT) $(LINTFLAGS) $(CPPFLAGS) -i
. The same command is used on the C code produced from n.y
or n.l
.
TeX and Web
n.dvi
is made from n.tex
with the command $(TEX)
. n.tex
is made from n.web
with $(WEAVE)
, or from n.w
(and from n.ch
if it exists or can be made) with $(CWEAVE)
. n.p
is made from n.web
with $(TANGLE)
and n.c
is made from n.w
(and from n.ch
if it exists or can be made) with $(CTANGLE)
.
Texinfo and Info
n.dvi
is made from n.texinfo
, n.texi
, or n.txinfo
, with the command $(TEXI2DVI) $(TEXI2DVI_FLAGS)
. n.info
is made from n.texinfo
, n.texi
, or n.txinfo
, with the command $(MAKEINFO) $(MAKEINFO_FLAGS)
.
RCS
Any file n
is extracted if necessary from an RCS file named either n,v
or RCS/n,v
. The precise command used is $(CO) $(COFLAGS)
. n
will not be extracted from RCS if it already exists, even if the RCS file is newer. The rules for RCS are terminal (see section Match-Anything Pattern Rules), so RCS files cannot be generated from another source; they must actually exist.
SCCS
Any file n
is extracted if necessary from an SCCS file named either s.n
or SCCS/s.n
. The precise command used is $(GET) $(GFLAGS)
. The rules for SCCS are terminal (see section Match-Anything Pattern Rules), so SCCS files cannot be generated from another source; they must actually exist.
For the benefit of SCCS, a file n
is copied from n.sh
and made executable (by everyone). This is for shell scripts that are checked into SCCS. Since RCS preserves the execution permission of a file, you do not need to use this feature with RCS. We recommend that you avoid using of SCCS. RCS is widely held to be superior, and is also free. By choosing free software in place of comparable (or inferior) proprietary software, you support the free software movement.
Usually, you want to change only the variables listed in the table above, which are documented in the following section.
However, the commands in built-in implicit rules actually use variables such as COMPILE.c, LINK.p, and PREPROCESS.S, whose values contain the commands listed above.
make follows the convention that the rule to compile a .x
source file uses the variable COMPILE.x. Similarly, the rule to produce an executable from a .x
file uses LINK.x; and the rule to preprocess a .x
file uses PREPROCESS.x.
Every rule that produces an object file uses the variable OUTPUT_OPTION. make defines this variable either to contain -o $@
, or to be empty, depending on a compile-time option. You need the -o
option to ensure that the output goes into the right file when the source file is in a different directory, as when using VPATH (see section Searching Directories for Prerequisites). However, compilers on some systems do not accept a -o
switch for object files. If you use such a system, and use VPATH, some compilations will put their output in the wrong place. A possible workaround for this problem is to give OUTPUT_OPTION the value ; mv $*.o $@
.
Variables Used by Implicit Rules
The commands in built-in implicit rules make liberal use of certain predefined variables. You can alter these variables in the makefile, with arguments to make, or in the environment to alter how the implicit rules work without redefining the rules themselves. You can cancel all variables used by implicit rules with the -R
or --no-builtin-variables
option.
For example, the command used to compile a C source file actually says $(CC) -c $(CFLAGS) $(CPPFLAGS)
. The default values of the variables used are cc
and nothing, resulting in the command cc -c
. By redefining CC
to ncc
, you could cause ncc
to be used for all C compilations performed by the implicit rule. By redefining CFLAGS
to be -g
, you could pass the -g
option to each compilation. All implicit rules that do C compilation use $(CC)
to get the program name for the compiler and all include $(CFLAGS)
among the arguments given to the compiler.
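One way to see this without actually compiling is to combine a command-line variable override with -n (a sketch; prog.c and the flag value are made up):

```shell
mkdir -p demo_vars
printf 'int main(void) { return 0; }\n' > demo_vars/prog.c
printf 'prog: prog.o\n' > demo_vars/Makefile
# -n shows the commands the built-in rules would run, with our
# CFLAGS value spliced into the C compilation line.
make -C demo_vars -n CFLAGS=-O2 prog
```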
The variables used in implicit rules fall into two classes: those that are names of programs (like CC) and those that contain arguments for the programs (like CFLAGS). (The “name of a program” may also contain some command arguments, but it must start with an actual executable program name.) If a variable value contains more than one argument, separate them with spaces.
Here is a table of variables used as names of programs in built-in rules:
AR
Archive-maintaining program; default ar
.
AS
Program for doing assembly; default as
.
CC
Program for compiling C programs; default cc
.
CXX
Program for compiling C++ programs; default g++
.
CO
Program for extracting a file from RCS; default co
.
CPP
Program for running the C preprocessor, with results to standard output; default $(CC) -E
.
FC
Program for compiling or preprocessing Fortran and Ratfor programs; default f77
.
GET
Program for extracting a file from SCCS; default get
.
LEX
Program to use to turn Lex grammars into C programs or Ratfor programs; default lex
.
PC
Program for compiling Pascal programs; default pc
.
YACC
Program to use to turn Yacc grammars into C programs; default yacc
.
YACCR
Program to use to turn Yacc grammars into Ratfor programs; default yacc -r
.
MAKEINFO
Program to convert a Texinfo source file into an Info file; default makeinfo
.
TEX
Program to make TeX DVI files from TeX source; default tex
.
TEXI2DVI
Program to make TeX DVI files from Texinfo source; default texi2dvi
.
WEAVE
Program to translate Web into TeX; default weave
.
CWEAVE
Program to translate C Web into TeX; default cweave
.
TANGLE
Program to translate Web into Pascal; default tangle
.
CTANGLE
Program to translate C Web into C; default ctangle
.
RM
Command to remove a file; default rm -f
.
Here is a table of variables whose values are additional arguments for the programs above. The default values for all of these is the empty string, unless otherwise noted.
ARFLAGS
Flags to give the archive-maintaining program; default rv
.
ASFLAGS
Extra flags to give to the assembler (when explicitly invoked on a .s
or .S
file).
CFLAGS
Extra flags to give to the C compiler.
CXXFLAGS
Extra flags to give to the C++ compiler.
COFLAGS
Extra flags to give to the RCS co program.
CPPFLAGS
Extra flags to give to the C preprocessor and programs that use it (the C and Fortran compilers).
FFLAGS
Extra flags to give to the Fortran compiler.
GFLAGS
Extra flags to give to the SCCS get program.
LDFLAGS
Extra flags to give to compilers when they are supposed to invoke the linker, ld
.
LFLAGS
Extra flags to give to Lex.
PFLAGS
Extra flags to give to the Pascal compiler.
RFLAGS
Extra flags to give to the Fortran compiler for Ratfor programs.
YFLAGS
Extra flags to give to Yacc.
Chains of Implicit Rules
Sometimes a file can be made by a sequence of implicit rules. For example, a file n.o
could be made from n.y
by running first Yacc and then cc. Such a sequence is called a chain.
If the file n.c
exists, or is mentioned in the makefile, no special searching is required: make finds that the object file can be made by C compilation from n.c
; later on, when considering how to make n.c
, the rule for running Yacc is used. Ultimately both n.c
and n.o
are updated.
However, even if n.c
does not exist and is not mentioned, make knows how to envision it as the missing link between n.o
and n.y
! In this case, n.c
is called an intermediate file. Once make has decided to use the intermediate file, it is entered in the data base as if it had been mentioned in the makefile, along with the implicit rule that says how to create it.
Intermediate files are remade using their rules just like all other files. But intermediate files are treated differently in two ways.
The first difference is what happens if the intermediate file does not exist. If an ordinary file b does not exist, and make considers a target that depends on b, it invariably creates b and then updates the target from b. But if b is an intermediate file, then make can leave well enough alone. It won’t bother updating b, or the ultimate target, unless some prerequisite of b is newer than that target or there is some other reason to update that target.
The second difference is that if make does create b in order to update something else, it deletes b later on after it is no longer needed. Therefore, an intermediate file which did not exist before make also does not exist after make. make reports the deletion to you by printing a rm -f
command showing which file it is deleting.
Ordinarily, a file cannot be intermediate if it is mentioned in the makefile as a target or prerequisite. However, you can explicitly mark a file as intermediate by listing it as a prerequisite of the special target .INTERMEDIATE. This takes effect even if the file is mentioned explicitly in some other way.
You can prevent automatic deletion of an intermediate file by marking it as a secondary file. To do this, list it as a prerequisite of the special target .SECONDARY. When a file is secondary, make will not create the file merely because it does not already exist, but make does not automatically delete the file. Marking a file as secondary also marks it as intermediate.
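The deletion of intermediate files can be observed with a two-rule chain of made-up suffixes (.src to .mid to .out):

```shell
mkdir -p demo_chain
# Two chained pattern rules; %% in printf produces a literal %.
printf '%%.out: %%.mid\n\tcp $< $@\n\n%%.mid: %%.src\n\tcp $< $@\n' > demo_chain/Makefile
echo data > demo_chain/foo.src
# make builds foo.mid as an intermediate, uses it to build foo.out,
# then deletes it and prints an "rm foo.mid" line.
make -C demo_chain foo.out
test ! -f demo_chain/foo.mid && echo "intermediate foo.mid was deleted"
```

Adding `.SECONDARY: foo.mid` (or listing it under .INTERMEDIATE/.PRECIOUS as described here) changes this behavior.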
You can list the target pattern of an implicit rule (such as %.o
) as a prerequisite of the special target .PRECIOUS to preserve intermediate files made by implicit rules whose target patterns match that file’s name; see section Interrupting or Killing make.
A chain can involve more than two implicit rules. For example, it is possible to make a file foo
from RCS/foo.y,v
by running RCS, Yacc and cc. Then both foo.y
and foo.c
are intermediate files that are deleted at the end.
No single implicit rule can appear more than once in a chain. This means that make will not even consider such a ridiculous thing as making foo
from foo.o.o
by running the linker twice. This constraint has the added benefit of preventing any infinite loop in the search for an implicit rule chain.
There are some special implicit rules to optimize certain cases that would otherwise be handled by rule chains. For example, making foo
from foo.c
could be handled by compiling and linking with separate chained rules, using foo.o
as an intermediate file. But what actually happens is that a special rule for this case does the compilation and linking with a single cc command. The optimized rule is used in preference to the step-by-step chain because it comes earlier in the ordering of rules.
Defining and Redefining Pattern Rules
You define an implicit rule by writing a pattern rule. A pattern rule looks like an ordinary rule, except that its target contains the character %
(exactly one of them). The target is considered a pattern for matching file names; the %
can match any nonempty substring, while other characters match only themselves. The prerequisites likewise use %
to show how their names relate to the target name.
Thus, a pattern rule %.o : %.c
says how to make any file stem.o
from another file stem.c
.
Note that expansion using %
in pattern rules occurs after any variable or function expansions, which take place when the makefile is read. See section How to Use Variables, and section Functions for Transforming Text.
Introduction to Pattern Rules
A pattern rule contains the character %
(exactly one of them) in the target; otherwise, it looks exactly like an ordinary rule. The target is a pattern for matching file names; the %
matches any nonempty substring, while other characters match only themselves.
For example, %.c
as a pattern matches any file name that ends in .c
. s.%.c
as a pattern matches any file name that starts with s.
, ends in .c
and is at least five characters long. (There must be at least one character to match the %
.) The substring that the %
matches is called the stem.
%
in a prerequisite of a pattern rule stands for the same stem that was matched by the %
in the target. In order for the pattern rule to apply, its target pattern must match the file name under consideration, and its prerequisite patterns must name files that exist or can be made. These files become prerequisites of the target.
Thus, a rule of the form
%.o : %.c ; command…
specifies how to make a file n.o
, with another file n.c
as its prerequisite, provided that n.c
exists or can be made.
There may also be prerequisites that do not use %
; such a prerequisite attaches to every file made by this pattern rule. These unvarying prerequisites are useful occasionally.
A pattern rule need not have any prerequisites that contain %
, or in fact any prerequisites at all. Such a rule is effectively a general wildcard. It provides a way to make any file that matches the target pattern. See section Defining Last-Resort Default Rules.
Pattern rules may have more than one target. Unlike normal rules, this does not act as many different rules with the same prerequisites and commands. If a pattern rule has multiple targets, make knows that the rule’s commands are responsible for making all of the targets. The commands are executed only once to make all the targets. When searching for a pattern rule to match a target, the target patterns of a rule other than the one that matches the target in need of a rule are incidental: make worries only about giving commands and prerequisites to the file presently in question. However, when this file’s commands are run, the other targets are marked as having been updated themselves.
The order in which pattern rules appear in the makefile is important since this is the order in which they are considered. Of equally applicable rules, only the first one found is used. The rules you write take precedence over those that are built in. Note however, that a rule whose prerequisites actually exist or are mentioned always takes priority over a rule with prerequisites that must be made by chaining other implicit rules.
Pattern Rule Examples
Here are some examples of pattern rules actually predefined in make. First, the rule that compiles .c
files into .o
files:
%.o : %.c
        $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@
defines a rule that can make any file x.o
from x.c
. The command uses the automatic variables $@
and $\<
to substitute the names of the target file and the source file in each case where the rule applies (see section Automatic Variables).
Here is a second built-in rule:
% :: RCS/%,v
        $(CO) $(COFLAGS) $<
defines a rule that can make any file x
whatsoever from a corresponding file x,v
in the subdirectory RCS
. Since the target is %
, this rule will apply to any file whatever, provided the appropriate prerequisite file exists. The double colon makes the rule terminal, which means that its prerequisite may not be an intermediate file (see section Match-Anything Pattern Rules).
This pattern rule has two targets:
%.tab.c %.tab.h: %.y
        bison -d $<
This tells make that the command bison -d x.y
will make both x.tab.c
and x.tab.h
. If the file foo
depends on the files parse.tab.o
and scan.o
and the file scan.o
depends on the file parse.tab.h
, when parse.y
is changed, the command bison -d parse.y
will be executed only once, and the prerequisites of both parse.tab.o
and scan.o
will be satisfied. (Presumably the file parse.tab.o
will be recompiled from parse.tab.c
and the file scan.o
from scan.c
, while foo
is linked from parse.tab.o
, scan.o
, and its other prerequisites, and it will execute happily ever after.)
Automatic Variables
Suppose you are writing a pattern rule to compile a .c
file into a .o
file: how do you write the cc
command so that it operates on the right source file name? You cannot write the name in the command, because the name is different each time the implicit rule is applied.
What you do is use a special feature of make, the automatic variables. These variables have values computed afresh for each rule that is executed, based on the target and prerequisites of the rule. In this example, you would use $@
for the object file name and $<
for the source file name.
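Before the table, here is a sketch that prints several automatic variables from inside a pattern rule (file names extra.h, foo.c are invented for the demo):

```shell
mkdir -p demo_auto
# In the recipe, $@, $<, $^ and $* are expanded by make, not the shell.
printf '%%.o: %%.c extra.h\n\t@echo "target=$@ first=$< all=$^ stem=$*"\n\t@touch $@\n' > demo_auto/Makefile
touch demo_auto/foo.c demo_auto/extra.h
make -C demo_auto foo.o
# prints: target=foo.o first=foo.c all=foo.c extra.h stem=foo
```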
Here is a table of automatic variables:
$@
The file name of the target of the rule. If the target is an archive member, then $@
is the name of the archive file. In a pattern rule that has multiple targets (see section Introduction to Pattern Rules), $@
is the name of whichever target caused the rule’s commands to be run.
$%
The target member name, when the target is an archive member. See section Using make to Update Archive Files. For example, if the target is foo.a(bar.o)
then $%
is bar.o
and $@
is foo.a
. $%
is empty when the target is not an archive member.
$<
The name of the first prerequisite. If the target got its commands from an implicit rule, this will be the first prerequisite added by the implicit rule (see section Using Implicit Rules).
$?
The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (see section Using make to Update Archive Files).
$^
The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (see section Using make to Update Archive Files). A target has only one prerequisite on each other file it depends on, no matter how many times each file is listed as a prerequisite. So if you list a prerequisite more than once for a target, the value of $^ contains just one copy of the name.
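As a small illustration (with hypothetical file names), a prerequisite listed twice appears only once in $^:

```makefile
# util.o is listed twice below, but $^ expands to
# "main.o util.o", so each object is linked once.
prog: main.o util.o util.o
        $(CC) -o $@ $^
```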
$+
This is like $^
, but prerequisites listed more than once are duplicated in the order they were listed in the makefile. This is primarily useful for use in linking commands where it is meaningful to repeat library file names in a particular order.
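A sketch of such a linking command, with invented library names; the first archive is deliberately repeated so the linker scans it again after the second, and $+ preserves the repetition:

```makefile
# liba.a and libb.a reference each other's symbols, so liba.a
# is listed twice; $+ keeps both copies in order.
prog: main.o liba.a libb.a liba.a
        $(CC) -o $@ $+
```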
$*
The stem with which an implicit rule matches (see section How Patterns Match). If the target is dir/a.foo.b
and the target pattern is a.%.b
then the stem is dir/foo
. The stem is useful for constructing names of related files.
In a static pattern rule, the stem is part of the file name that matched the %
in the target pattern. In an explicit rule, there is no stem; so $* cannot be determined in that way. Instead, if the target name ends with a recognized suffix (see section Old-Fashioned Suffix Rules), $* is set to the target name minus the suffix. For example, if the target name is foo.c, then $* is set to foo, since .c is a suffix. GNU make does this bizarre thing only for compatibility with other implementations of make. You should generally avoid using $* except in implicit rules or static pattern rules. If the target name in an explicit rule does not end with a recognized suffix, $*
is set to the empty string for that rule.
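For example, a pattern rule might use the stem to name an auxiliary output file; this is a sketch, with the extra preprocessing step invented for illustration:

```makefile
# Compile %.c to %.o, and also save the preprocessed source
# as %.i, using the stem $* to build the related file name.
%.o: %.c
        $(CC) -E $(CPPFLAGS) $< > $*.i
        $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@
```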
$?
is useful even in explicit rules when you wish to operate on only the prerequisites that have changed. For example, suppose that an archive named lib
is supposed to contain copies of several object files. This rule copies just the changed object files into the archive:
lib: foo.o bar.o lose.o win.o
        ar r lib $?
Of the variables listed above, four have values that are single file names, and three have values that are lists of file names. These seven have variants that get just the file’s directory name or just the file name within the directory. The variant variables’ names are formed by appending D
or F
, respectively. These variants are semi-obsolete in GNU make since the functions dir and notdir can be used to get a similar effect (see section Functions for File Names). Note, however, that the F
variants all omit the trailing slash which always appears in the output of the dir function. Here is a table of the variants:
$(@D)
The directory part of the file name of the target, with the trailing slash removed. If the value of $@
is dir/foo.o
then $(@D)
is dir
. This value is .
if $@
does not contain a slash.
$(@F)
The file-within-directory part of the file name of the target. If the value of $@
is dir/foo.o
then $(@F)
is foo.o
. $(@F)
is equivalent to $(notdir $@)
.
$(*D)
$(*F)
The directory part and the file-within-directory part of the stem; dir
and foo
in this example.
$(%D)
$(%F)
The directory part and the file-within-directory part of the target archive member name. This makes sense only for archive member targets of the form archive(member)
and is useful only when member may contain a directory name. (See section Archive Members as Targets.)
$(<D)
$(<F)
The directory part and the file-within-directory part of the first prerequisite.
$(^D)
$(^F)
Lists of the directory parts and the file-within-directory parts of all prerequisites.
$(?D)
$(?F)
Lists of the directory parts and the file-within-directory parts of all prerequisites that are newer than the target.
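One common use of these variants is creating the target's directory before writing into it; a sketch, assuming an obj/ output layout:

```makefile
# $(@D) is the directory part of the target (here, obj or a
# subdirectory of it), created on demand before compiling.
obj/%.o: %.c
        mkdir -p $(@D)
        $(CC) -c $(CFLAGS) $< -o $@
```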
Note that we use a special stylistic convention when we talk about these automatic variables; we write “the value of $<”, rather than “the variable <” as we would write for ordinary variables such as objects and CFLAGS. We think this convention looks more natural in this special case. Please do not assume it has a deep significance; $< refers to the variable named < just as $(CFLAGS) refers to the variable named CFLAGS. You could just as well use $(<) in place of $<
.
How Patterns Match
A target pattern is composed of a %
between a prefix and a suffix, either or both of which may be empty. The pattern matches a file name only if the file name starts with the prefix and ends with the suffix, without overlap. The text between the prefix and the suffix is called the stem. Thus, when the pattern %.o
matches the file name test.o
, the stem is test
. The pattern rule prerequisites are turned into actual file names by substituting the stem for the character %
. Thus, if in the same example one of the prerequisites is written as %.c
, it expands to test.c
.
When the target pattern does not contain a slash (and it usually does not), directory names in the file names are removed from the file name before it is compared with the target prefix and suffix. After the comparison of the file name to the target pattern, the directory names, along with the slash that ends them, are added on to the prerequisite file names generated from the pattern rule’s prerequisite patterns and the file name. The directories are ignored only for the purpose of finding an implicit rule to use, not in the application of that rule. Thus, e%t
matches the file name src/eat
, with src/a
as the stem. When prerequisites are turned into file names, the directories from the stem are added at the front, while the rest of the stem is substituted for the %
. The stem src/a
with a prerequisite pattern c%r
gives the file name src/car
.
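The example above can be written as an actual, if contrived, rule; with it, asking make for src/eat makes it look for the prerequisite src/car:

```makefile
# For src/eat: the directory src/ is stripped, e%t matches
# eat with stem a, and the prerequisite pattern c%r gives
# car; the directory is restored, yielding src/car.
e%t: c%r
        cp $< $@
```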
Match-Anything Pattern Rules
When a pattern rule’s target is just %
, it matches any file name whatever. We call these rules match-anything rules. They are very useful, but it can take a lot of time for make to think about them, because it must consider every such rule for each file name listed either as a target or as a prerequisite.
Suppose the makefile mentions foo.c
. For this target, make would have to consider making it by linking an object file foo.c.o
, or by C compilation-and-linking in one step from foo.c.c
, or by Pascal compilation-and-linking from foo.c.p
, and many other possibilities.
We know these possibilities are ridiculous since foo.c
is a C source file, not an executable. If make did consider these possibilities, it would ultimately reject them, because files such as foo.c.o
and foo.c.p
would not exist. But these possibilities are so numerous that make would run very slowly if it had to consider them.
To gain speed, we have put various constraints on the way make considers match-anything rules. There are two different constraints that can be applied, and each time you define a match-anything rule you must choose one or the other for that rule.
One choice is to mark the match-anything rule as terminal by defining it with a double colon. When a rule is terminal, it does not apply unless its prerequisites actually exist. Prerequisites that could be made with other implicit rules are not good enough. In other words, no further chaining is allowed beyond a terminal rule.
For example, the built-in implicit rules for extracting sources from RCS and SCCS files are terminal; as a result, if the file foo.c,v
does not exist, make will not even consider trying to make it as an intermediate file from foo.c,v.o
or from RCS/SCCS/s.foo.c,v
. RCS and SCCS files are generally ultimate source files, which should not be remade from any other files; therefore, make can save time by not looking for ways to remake them.
If you do not mark the match-anything rule as terminal, then it is nonterminal. A nonterminal match-anything rule cannot apply to a file name that indicates a specific type of data. A file name indicates a specific type of data if some non-match-anything implicit rule target matches it.
For example, the file name foo.c
matches the target for the pattern rule %.c : %.y
(the rule to run Yacc). Regardless of whether this rule is actually applicable (which happens only if there is a file foo.y
), the fact that its target matches is enough to prevent consideration of any nonterminal match-anything rules for the file foo.c
. Thus, make will not even consider trying to make foo.c
as an executable file from foo.c.o
, foo.c.c
, foo.c.p
, etc.
The motivation for this constraint is that nonterminal match-anything rules are used for making files containing specific types of data (such as executable files) and a file name with a recognized suffix indicates some other specific type of data (such as a C source file).
Special built-in dummy pattern rules are provided solely to recognize certain file names so that nonterminal match-anything rules will not be considered. These dummy rules have no prerequisites and no commands, and they are ignored for all other purposes. For example, the built-in implicit rule
%.p :
exists to make sure that Pascal source files such as foo.p
match a specific target pattern and thereby prevent time from being wasted looking for foo.p.o
or foo.p.c
.
Dummy pattern rules such as the one for %.p
are made for every suffix listed as valid for use in suffix rules (see section Old-Fashioned Suffix Rules).
Canceling Implicit Rules
You can override a built-in implicit rule (or one you have defined yourself) by defining a new pattern rule with the same target and prerequisites, but different commands. When the new rule is defined, the built-in one is replaced. The new rule’s position in the sequence of implicit rules is determined by where you write the new rule.
You can cancel a built-in implicit rule by defining a pattern rule with the same target and prerequisites, but no commands. For example, the following would cancel the rule that runs the assembler:
%.o : %.s
Defining Last-Resort Default Rules
You can define a last-resort implicit rule by writing a terminal match-anything pattern rule with no prerequisites (see section Match-Anything Pattern Rules). This is just like any other pattern rule; the only thing special about it is that it will match any target. So such a rule’s commands are used for all targets and prerequisites that have no commands of their own and for which no other implicit rule applies.
For example, when testing a makefile, you might not care if the source files contain real data, only that they exist. Then you might do this:
%::
        touch $@
to cause all the source files needed (as prerequisites) to be created automatically.
You can instead define commands to be used for targets for which there are no rules at all, even ones which don’t specify commands. You do this by writing a rule for the target .DEFAULT. Such a rule’s commands are used for all prerequisites which do not appear as targets in any explicit rule, and for which no implicit rule applies. Naturally, there is no .DEFAULT rule unless you write one.
If you use .DEFAULT with no commands or prerequisites:
.DEFAULT:
the commands previously stored for .DEFAULT are cleared. Then make acts as if you had never defined .DEFAULT at all.
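For completeness, here is a sketch of a .DEFAULT rule with commands; it is run for any target or prerequisite that no explicit or implicit rule covers:

```makefile
# Print a message instead of failing for otherwise
# unbuildable files.
.DEFAULT:
        @echo "no rule for $@; assuming it is up to date"
```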
If you do not want a target to get the commands from a match-anything pattern rule or .DEFAULT, but you also do not want any commands to be run for the target, you can give it empty commands (see section Using Empty Commands).
You can use a last-resort rule to override part of another makefile. See section Overriding Part of Another Makefile.
Old-Fashioned Suffix Rules
Suffix rules are the old-fashioned way of defining implicit rules for make. Suffix rules are obsolete because pattern rules are more general and clearer. They are supported in GNU make for compatibility with old makefiles. They come in two kinds: double-suffix and single-suffix.
A double-suffix rule is defined by a pair of suffixes: the target suffix and the source suffix. It matches any file whose name ends with the target suffix. The corresponding implicit prerequisite is made by replacing the target suffix with the source suffix in the file name. A two-suffix rule whose target and source suffixes are .o
and .c
is equivalent to the pattern rule %.o : %.c
.
A single-suffix rule is defined by a single suffix, which is the source suffix. It matches any file name, and the corresponding implicit prerequisite name is made by appending the source suffix. A single-suffix rule whose source suffix is .c
is equivalent to the pattern rule % : %.c
.
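Spelled out, a single-suffix rule for .c might look like this sketch, which builds an executable x from x.c:

```makefile
# Old-fashioned single-suffix rule, equivalent to "% : %.c".
.c:
        $(CC) $(CFLAGS) $(CPPFLAGS) $< -o $@
```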
Suffix rule definitions are recognized by comparing each rule’s target against a defined list of known suffixes. When make sees a rule whose target is a known suffix, this rule is considered a single-suffix rule. When make sees a rule whose target is two known suffixes concatenated, this rule is taken as a double-suffix rule.
For example, .c
and .o
are both on the default list of known suffixes. Therefore, if you define a rule whose target is .c.o
, make takes it to be a double-suffix rule with source suffix .c
and target suffix .o
. Here is the old-fashioned way to define the rule for compiling a C source file:
.c.o:
        $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@ $<
Suffix rules cannot have any prerequisites of their own. If they have any, they are treated as normal files with funny names, not as suffix rules. Thus, the rule:
.c.o: foo.h
        $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@ $<
tells how to make the file .c.o
from the prerequisite file foo.h
, and is not at all like the pattern rule:
%.o: %.c foo.h
        $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@ $<
which tells how to make .o
files from .c
files, and makes all .o
files using this pattern rule also depend on foo.h
.
Suffix rules with no commands are also meaningless. They do not remove previous rules as do pattern rules with no commands (see section Canceling Implicit Rules). They simply enter the suffix or pair of suffixes concatenated as a target in the data base.
The known suffixes are simply the names of the prerequisites of the special target .SUFFIXES. You can add your own suffixes by writing a rule for .SUFFIXES that adds more prerequisites, as in:
.SUFFIXES: .hack .win
which adds .hack
and .win
to the end of the list of suffixes.
If you wish to eliminate the default known suffixes instead of just adding to them, write a rule for .SUFFIXES with no prerequisites. By special dispensation, this eliminates all existing prerequisites of .SUFFIXES. You can then write another rule to add the suffixes you want. For example,
.SUFFIXES:            # Delete the default suffixes
.SUFFIXES: .c .o .h   # Define our suffix list
The -r
or --no-builtin-rules
flag causes the default list of suffixes to be empty.
The variable SUFFIXES is defined to the default list of suffixes before make reads any makefiles. You can change the list of suffixes with a rule for the special target .SUFFIXES, but that does not alter this variable.
Implicit Rule Search Algorithm
Here is the procedure make uses for searching for an implicit rule for a target t. This procedure is followed for each double-colon rule with no commands, for each target of ordinary rules none of which have commands, and for each prerequisite that is not the target of any rule. It is also followed recursively for prerequisites that come from implicit rules, in the search for a chain of rules.
Suffix rules are not mentioned in this algorithm because suffix rules are converted to equivalent pattern rules once the makefiles have been read in.
For an archive member target of the form archive(member)
, the following algorithm is run twice, first using the entire target name t, and second using (member)
as the target t if the first run found no rule.
- Split t into a directory part, called d, and the rest, called n. For example, if t is src/foo.o, then d is src/ and n is foo.o.
- Make a list of all the pattern rules one of whose targets matches t or n. If the target pattern contains a slash, it is matched against t; otherwise, against n.
- If any rule in that list is not a match-anything rule, then remove all nonterminal match-anything rules from the list.
- Remove from the list all rules with no commands.
- For each pattern rule in the list:
  - Find the stem s, which is the nonempty part of t or n matched by the % in the target pattern.
  - Compute the prerequisite names by substituting s for %; if the target pattern does not contain a slash, append d to the front of each prerequisite name.
  - Test whether all the prerequisites exist or ought to exist. (If a file name is mentioned in the makefile as a target or as an explicit prerequisite, then we say it ought to exist.) If all prerequisites exist or ought to exist, or there are no prerequisites, then this rule applies.
- If no pattern rule has been found so far, try harder. For each pattern rule in the list:
  - If the rule is terminal, ignore it and go on to the next rule.
  - Compute the prerequisite names as before.
  - Test whether all the prerequisites exist or ought to exist.
  - For each prerequisite that does not exist, follow this algorithm recursively to see if the prerequisite can be made by an implicit rule.
  - If all prerequisites exist, ought to exist, or can be made by implicit rules, then this rule applies.
- If no implicit rule applies, the rule for .DEFAULT, if any, applies. In that case, give t the same commands that .DEFAULT has. Otherwise, there are no commands for t.
Once a rule that applies has been found, for each target pattern of the rule other than the one that matched t or n, the %
in the pattern is replaced with s and the resultant file name is stored until the commands to remake the target file t are executed. After these commands are executed, each of these stored file names are entered into the data base and marked as having been updated and having the same update status as the file t.
When the commands of a pattern rule are executed for t, the automatic variables are set corresponding to the target and prerequisites. See section Automatic Variables.
Using make to Update Archive Files
Archive files are files containing named subfiles called members; they are maintained with the program ar and their main use is as subroutine libraries for linking.
Archive Members as Targets
An individual member of an archive file can be used as a target or prerequisite in make. You specify the member named member in archive file archive as follows:
archive(member)
This construct is available only in targets and prerequisites, not in commands! Most programs that you might use in commands do not support this syntax and cannot act directly on archive members. Only ar and other programs specifically designed to operate on archives can do so. Therefore, valid commands to update an archive member target probably must use ar. For example, this rule says to create a member hack.o
in archive foolib
by copying the file hack.o
:
foolib(hack.o) : hack.o
        ar cr foolib hack.o
In fact, nearly all archive member targets are updated in just this way and there is an implicit rule to do it for you. Note: The c
flag to ar is required if the archive file does not already exist.
To specify several members in the same archive, you can write all the member names together between the parentheses. For example:
foolib(hack.o kludge.o)
is equivalent to:
foolib(hack.o) foolib(kludge.o)
You can also use shell-style wildcards in an archive member reference. See section Using Wildcard Characters in File Names. For example, foolib(*.o)
expands to all existing members of the foolib
archive whose names end in .o
; perhaps foolib(hack.o) foolib(kludge.o)
.
Implicit Rule for Archive Member Targets
Recall that a target that looks like a(m)
stands for the member named m in the archive file a.
When make looks for an implicit rule for such a target, as a special feature it considers implicit rules that match (m)
, as well as those that match the actual target a(m)
.
This causes one special rule whose target is (%)
to match. This rule updates the target a(m)
by copying the file m into the archive. For example, it will update the archive member target foo.a(bar.o)
by copying the file bar.o
into the archive foo.a
as a member named bar.o
.
When this rule is chained with others, the result is very powerful. Thus, make "foo.a(bar.o)"
(the quotes are needed to protect the (
and )
from being interpreted specially by the shell) in the presence of a file bar.c
is enough to cause the following commands to be run, even without a makefile:
cc -c bar.c -o bar.o
ar r foo.a bar.o
rm -f bar.o
Here make has envisioned the file bar.o
as an intermediate file. See section Chains of Implicit Rules.
Implicit rules such as this one are written using the automatic variable $%
. See section Automatic Variables.
An archive member name in an archive cannot contain a directory name, but it may be useful in a makefile to pretend that it does. If you write an archive member target foo.a(dir/file.o)
, make will perform automatic updating with this command:
ar r foo.a dir/file.o
which has the effect of copying the file dir/file.o
into a member named file.o
. In connection with such usage, the automatic variables %D and %F may be useful.
Updating Archive Symbol Directories
An archive file that is used as a library usually contains a special member named __.SYMDEF
that contains a directory of the external symbol names defined by all the other members. After you update any other members, you need to update __.SYMDEF
so that it will summarize the other members properly. This is done by running the ranlib program:
ranlib archivefile
Normally you would put this command in the rule for the archive file, and make all the members of the archive file prerequisites of that rule. For example,
libfoo.a: libfoo.a(x.o) libfoo.a(y.o) …
        ranlib libfoo.a
The effect of this is to update archive members x.o
, y.o
, etc., and then update the symbol directory member __.SYMDEF
by running ranlib. The rules for updating the members are not shown here; most likely you can omit them and use the implicit rule which copies files into the archive, as described in the preceding section.
This is not necessary when using the GNU ar program, which updates the __.SYMDEF
member automatically.
Dangers When Using Archives
It is important to be careful when using parallel execution (the -j switch; see section Parallel Execution) and archives. If multiple ar commands run at the same time on the same archive file, they will not know about each other and can corrupt the file.
Possibly a future version of make will provide a mechanism to circumvent this problem by serializing all commands that operate on the same archive file. But for the time being, you must either write your makefiles to avoid this problem in some other way, or not use -j.
Suffix Rules for Archive Files
You can write a special kind of suffix rule for dealing with archive files. See section Old-Fashioned Suffix Rules, for a full explanation of suffix rules. Archive suffix rules are obsolete in GNU make, because pattern rules for archives are a more general mechanism (see section Implicit Rule for Archive Member Targets). But they are retained for compatibility with other makes.
To write a suffix rule for archives, you simply write a suffix rule using the target suffix .a
(the usual suffix for archive files). For example, here is the old-fashioned suffix rule to update a library archive from C source files:
.c.a:
        $(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $*.o
        $(AR) r $@ $*.o
        $(RM) $*.o
This works just as if you had written the pattern rule:
(%.o): %.c
        $(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $*.o
        $(AR) r $@ $*.o
        $(RM) $*.o
In fact, this is just what make does when it sees a suffix rule with .a
as the target suffix. Any double-suffix rule .x.a
is converted to a pattern rule with the target pattern (%.o)
and a prerequisite pattern of %.x
.
Since you might want to use .a
as the suffix for some other kind of file, make also converts archive suffix rules to pattern rules in the normal way (see section Old-Fashioned Suffix Rules). Thus a double-suffix rule .x.a
produces two pattern rules: (%.o): %.x
and %.a: %.x
.
Features of GNU make
Here is a summary of the features of GNU make, for comparison with and credit to other versions of make. We consider the features of make in 4.2 BSD systems as a baseline. If you are concerned with writing portable makefiles, you should not use the features of make listed here, nor the ones in section Incompatibilities and Missing Features.
Many features come from the version of make in System V.
- The VPATH variable and its special meaning. See section Searching Directories for Prerequisites. This feature exists in System V make, but is undocumented. It is documented in 4.3 BSD make (which says it mimics System V’s VPATH feature).
- Included makefiles. See section Including Other Makefiles. Allowing multiple files to be included with a single directive is a GNU extension.
- Variables are read from and communicated via the environment. See section Variables from the Environment.
- Options passed through the variable MAKEFLAGS to recursive invocations of make. See section Communicating Options to a Sub-make.
- The automatic variable $% is set to the member name in an archive reference. See section Automatic Variables.
- The automatic variables $@, $*, $<, $%, and $? have corresponding forms like $(@F) and $(@D). We have generalized this to $^ as an obvious extension. See section Automatic Variables.
- Substitution variable references. See section Basics of Variable References.
- The command-line options -b and -m, accepted and ignored. In System V make, these options actually do something.
- Execution of recursive commands to run make via the variable MAKE even if -n, -q or -t is specified. See section Recursive Use of make.
- Support for suffix .a in suffix rules. See section Suffix Rules for Archive Files. This feature is obsolete in GNU make, because the general feature of rule chaining (see section Chains of Implicit Rules) allows one pattern rule for installing members in an archive (see section Implicit Rule for Archive Member Targets) to be sufficient.
- The arrangement of lines and backslash-newline combinations in commands is retained when the commands are printed, so they appear as they do in the makefile, except for the stripping of initial whitespace.
The following features were inspired by various other versions of make. In some cases it is unclear exactly which versions inspired which others.
- Pattern rules using %. This has been implemented in several versions of make. We’re not sure who invented it first, but it’s been spread around a bit. See section Defining and Redefining Pattern Rules.
- Rule chaining and implicit intermediate files. This was implemented by Stu Feldman in his version of make for AT&T Eighth Edition Research Unix, and later by Andrew Hume of AT&T Bell Labs in his mk program (where he terms it “transitive closure”). We do not really know if we got this from either of them or thought it up ourselves at the same time. See section Chains of Implicit Rules.
- The automatic variable $^ containing a list of all prerequisites of the current target. We did not invent this, but we have no idea who did. See section Automatic Variables. The automatic variable $+ is a simple extension of $^.
- The “what if” flag (-W in GNU make) was (as far as we know) invented by Andrew Hume in mk. See section Instead of Executing the Commands.
- The concept of doing several things at once (parallelism) exists in many incarnations of make and similar programs, though not in the System V or BSD implementations. See section Command Execution.
- Modified variable references using pattern substitution come from SunOS 4. See section Basics of Variable References. This functionality was provided in GNU make by the patsubst function before the alternate syntax was implemented for compatibility with SunOS 4. It is not altogether clear who inspired whom, since GNU make had patsubst before SunOS 4 was released.
- The special significance of + characters preceding command lines (see section Instead of Executing the Commands) is mandated by IEEE Standard 1003.2-1992 (POSIX.2).
- The += syntax to append to the value of a variable comes from SunOS 4 make. See section Appending More Text to Variables.
- The syntax archive(mem1 mem2...) to list multiple members in a single archive file comes from SunOS 4 make. See section Archive Members as Targets.
- The -include directive to include makefiles with no error for a nonexistent file comes from SunOS 4 make. (But note that SunOS 4 make does not allow multiple makefiles to be specified in one -include directive.) The same feature appears with the name sinclude in SGI make and perhaps others.
The remaining features are inventions new in GNU make:
- Use the -v or --version option to print version and copyright information.
- Use the -h or --help option to summarize the options to make.
- Simply-expanded variables. See section The Two Flavors of Variables.
- Pass command-line variable assignments automatically through the variable MAKE to recursive make invocations. See section Recursive Use of make.
- Use the -C or --directory command option to change directory. See section Summary of Options.
- Make verbatim variable definitions with define. See section Defining Variables Verbatim.
- Declare phony targets with the special target .PHONY. Andrew Hume of AT&T Bell Labs implemented a similar feature with a different syntax in his mk program. This seems to be a case of parallel discovery. See section Phony Targets.
- Manipulate text by calling functions. See section Functions for Transforming Text.
- Use the -o or --old-file option to pretend a file’s modification-time is old. See section Avoiding Recompilation of Some Files.
- Conditional execution. This feature has been implemented numerous times in various versions of make; it seems a natural extension derived from the features of the C preprocessor and similar macro languages and is not a revolutionary concept. See section Conditional Parts of Makefiles.
- Specify a search path for included makefiles. See section Including Other Makefiles.
- Specify extra makefiles to read with an environment variable. See section The Variable MAKEFILES.
- Strip leading sequences of ./ from file names, so that ./file and file are considered to be the same file.
- Use a special search method for library prerequisites written in the form -lname. See section Directory Search for Link Libraries.
- Allow suffixes for suffix rules (see section Old-Fashioned Suffix Rules) to contain any characters. In other versions of make, they must begin with . and not contain any / characters.
- Keep track of the current level of make recursion using the variable MAKELEVEL. See section Recursive Use of make.
- Provide any goals given on the command line in the variable MAKECMDGOALS. See section Arguments to Specify the Goals.
- Specify static pattern rules. See section Static Pattern Rules.
- Provide selective vpath search. See section Searching Directories for Prerequisites.
- Provide computed variable references. See section Basics of Variable References.
- Update makefiles. See section How Makefiles Are Remade. System V make has a very, very limited form of this functionality in that it will check out SCCS files for makefiles.
- Various new built-in implicit rules. See section Catalogue of Implicit Rules.
- The built-in variable MAKE_VERSION gives the version number of make.
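The conditional-execution feature mentioned above looks like this in a makefile. This is a minimal sketch; the variable names gnu_libs and normal_libs are illustrative, not part of make itself:

```makefile
# Pick a library list depending on which compiler is in use.
ifeq ($(CC),gcc)
  libs = $(gnu_libs)
else
  libs = $(normal_libs)
endif

foo: foo.c
	$(CC) -o foo foo.c $(libs)
```

The condition is evaluated when the makefile is read, so the chosen definition of libs is fixed before any rule runs.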
Incompatibilities and Missing Features
The make programs in various other systems support a few features that are not implemented in GNU make. The POSIX.2 standard (IEEE Standard 1003.2-1992), which specifies make, does not require any of these features.
- A target of the form file((entry)) stands for a member of archive file file. The member is chosen, not by name, but by being an object file which defines the linker symbol entry. This feature was not put into GNU make because of the nonmodularity of putting knowledge into make of the internal format of archive file symbol tables. See section Updating Archive Symbol Directories.
- Suffixes (used in suffix rules) that end with the character ~ have a special meaning to System V make; they refer to the SCCS file that corresponds to the file one would get without the ~. For example, the suffix rule .c~.o would make the file n.o from the SCCS file s.n.c. For complete coverage, a whole series of such suffix rules is required. See section Old-Fashioned Suffix Rules. In GNU make, this entire series of cases is handled by two pattern rules for extraction from SCCS, in combination with the general feature of rule chaining. See section Chains of Implicit Rules.
- In System V make, the string $@ has the strange meaning that, in the prerequisites of a rule with multiple targets, it stands for the particular target that is being processed. This is not defined in GNU make because $$ should always stand for an ordinary $. It is possible to get portions of this functionality through the use of static pattern rules (see section Static Pattern Rules). The System V make rule:
$(targets): $$@.o lib.a
can be replaced with the GNU make static pattern rule:
$(targets): %: %.o lib.a
- In System V and 4.3 BSD make, files found by VPATH search (see section Searching Directories for Prerequisites) have their names changed inside command strings. We feel it is much cleaner to always use automatic variables and thus make this feature obsolete.
- In some Unix makes, the automatic variable $* appearing in the prerequisites of a rule has the amazingly strange "feature" of expanding to the full name of the target of that rule. We cannot imagine what went on in the minds of Unix make developers to do this; it is utterly inconsistent with the normal definition of $*.
- In some Unix makes, implicit rule search (see section Using Implicit Rules) is apparently done for all targets, not just those without commands. This means you can do:
foo.o:
	cc -c foo.c
and Unix make will intuit that foo.o depends on foo.c. We feel that such usage is broken. The prerequisite properties of make are well-defined (for GNU make, at least), and doing such a thing simply does not fit the model.
- GNU make does not include any built-in implicit rules for compiling or preprocessing EFL programs. If we hear of anyone who is using EFL, we will gladly add them.
- It appears that in SVR4 make, a suffix rule can be specified with no commands, and it is treated as if it had empty commands (see section Using Empty Commands). For example:
.c.a:
will override the built-in .c.a suffix rule. We feel that it is cleaner for a rule without commands to always simply add to the prerequisite list for the target. The above example can be easily rewritten to get the desired behavior in GNU make:
.c.a: ;
- Some versions of make invoke the shell with the -e flag, except under -k (see section Testing the Compilation of a Program). The -e flag tells the shell to exit as soon as any program it runs returns a nonzero status. We feel it is cleaner to write each shell command line to stand on its own and not require this special treatment.
Makefile Conventions
This chapter describes conventions for writing the Makefiles for GNU programs. Using Automake will help you write a Makefile that follows these conventions.
General Conventions for Makefiles
Every Makefile should contain this line:
SHELL = /bin/sh
to avoid trouble on systems where the SHELL variable might be inherited from the environment. (This is never a problem with GNU make.)
Different make programs have incompatible suffix lists and implicit rules, and this sometimes creates confusion or misbehavior. So it is a good idea to set the suffix list explicitly using only the suffixes you need in the particular Makefile, like this:
.SUFFIXES:
.SUFFIXES: .c .o
The first line clears out the suffix list, the second introduces all suffixes which may be subject to implicit rules in this Makefile.
Don't assume that . is in the path for command execution. When you need to run programs that are a part of your package during the make, please make sure that it uses ./ if the program is built as part of the make or $(srcdir)/ if the file is an unchanging part of the source code. Without one of these prefixes, the current search path is used.
The distinction between ./ (the build directory) and $(srcdir)/ (the source directory) is important because users can build in a separate directory using the --srcdir option to configure. A rule of the form:
foo.1 : foo.man sedscript
	sed -e sedscript foo.man > foo.1
will fail when the build directory is not the source directory, because foo.man and sedscript are in the source directory.
When using GNU make, relying on VPATH to find the source file will work in the case where there is a single dependency file, since the make automatic variable $< will represent the source file wherever it is. (Many versions of make set $< only in implicit rules.) A Makefile target like
foo.o : bar.c
	$(CC) -I. -I$(srcdir) $(CFLAGS) -c bar.c -o foo.o
should instead be written as
foo.o : bar.c
	$(CC) -I. -I$(srcdir) $(CFLAGS) -c $< -o $@
in order to allow VPATH to work correctly. When the target has multiple dependencies, using an explicit $(srcdir) is the easiest way to make the rule work well. For example, the target above for foo.1 is best written as:
foo.1 : foo.man sedscript
	sed -e $(srcdir)/sedscript $(srcdir)/foo.man > $@
GNU distributions usually contain some files which are not source files; for example, Info files, and the output from Autoconf, Automake, Bison or Flex. Since these files normally appear in the source directory, they should always appear in the source directory, not in the build directory. So Makefile rules to update them should put the updated files in the source directory.
However, if a file does not appear in the distribution, then the Makefile should not put it in the source directory, because building a program in ordinary circumstances should not modify the source directory in any way.
Try to make the build and installation targets, at least (and all their subtargets) work correctly with a parallel make.
Utilities in Makefiles
Write the Makefile commands (and any shell scripts, such as configure) to run in sh, not in csh. Don’t use any special features of ksh or bash.
The configure script and the Makefile rules for building and installation should not use any utilities directly except these:
cat cmp cp diff echo egrep expr false grep install-info ln ls mkdir mv pwd rm rmdir sed sleep sort tar test touch true
The compression program gzip can be used in the dist rule.
Stick to the generally supported options for these programs. For example, don't use mkdir -p, convenient as it may be, because most systems don't support it.
It is a good idea to avoid creating symbolic links in makefiles, since a few systems don’t support them.
The Makefile rules for building and installation can also use compilers and related programs, but should do so via make variables so that the user can substitute alternatives. Here are some of the programs we mean:
ar bison cc flex install ld ldconfig lex make makeinfo ranlib texi2dvi yacc
Use the following make variables to run those programs:
$(AR) $(BISON) $(CC) $(FLEX) $(INSTALL) $(LD) $(LDCONFIG) $(LEX) $(MAKE) $(MAKEINFO) $(RANLIB) $(TEXI2DVI) $(YACC)
When you use ranlib or ldconfig, you should make sure nothing bad happens if the system does not have the program in question. Arrange to ignore an error from that command, and print a message before the command to tell the user that failure of this command does not mean a problem. (The Autoconf AC_PROG_RANLIB macro can help with this.)
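One way to arrange this is sketched below; the leading `-` and the warning message are the point, and the wording of the message is our own, not a required form:

```makefile
# Build the library, then try to run ranlib.  The leading `-' makes
# make ignore a nonzero exit status from ranlib, and the echo tells
# the user in advance that a failure here is harmless.
libfoo.a: $(OBJS)
	$(AR) rc $@ $(OBJS)
	@echo "The following ranlib may fail harmlessly on systems that do not need it:"
	-$(RANLIB) $@
```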
If you use symbolic links, you should implement a fallback for systems that don’t have symbolic links.
Additional utilities that can be used via Make variables are:
chgrp chmod chown mknod
It is ok to use other utilities in Makefile portions (or scripts) intended only for particular systems where you know those utilities exist.
Variables for Specifying Commands
Makefiles should provide variables for overriding certain commands, options, and so on.
In particular, you should run most utility programs via variables. Thus, if you use Bison, have a variable named BISON whose default value is set with BISON = bison, and refer to it with $(BISON) whenever you need to use Bison.
File management utilities such as ln, rm, mv, and so on, need not be referred to through variables in this way, since users don’t need to replace them with other programs.
Each program-name variable should come with an options variable that is used to supply options to the program. Append FLAGS to the program-name variable name to get the options variable name; for example, BISONFLAGS. (The names CFLAGS for the C compiler, YFLAGS for yacc, and LFLAGS for lex, are exceptions to this rule, but we keep them because they are standard.) Use CPPFLAGS in any compilation command that runs the preprocessor, and use LDFLAGS in any compilation command that does linking as well as in any direct use of ld.
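For example, a Bison invocation through these variables might be sketched as follows (the file names parse.y and parse.c are illustrative):

```makefile
BISON = bison
BISONFLAGS =

# Users can supply extra options on the command line, e.g.
#   make BISONFLAGS='-d -v'
parse.c: parse.y
	$(BISON) $(BISONFLAGS) -o $@ $<
```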
If there are C compiler options that must be used for proper compilation of certain files, do not include them in CFLAGS. Users expect to be able to specify CFLAGS freely themselves. Instead, arrange to pass the necessary options to the C compiler independently of CFLAGS, by writing them explicitly in the compilation commands or by defining an implicit rule, like this:
CFLAGS = -g
ALL_CFLAGS = -I. $(CFLAGS)
.c.o:
	$(CC) -c $(CPPFLAGS) $(ALL_CFLAGS) $<
Do include the -g option in CFLAGS, because that is not required for proper compilation. You can consider it a default that is only recommended. If the package is set up so that it is compiled with GCC by default, then you might as well include -O in the default value of CFLAGS as well.
Put CFLAGS last in the compilation command, after other variables containing compiler options, so the user can use CFLAGS to override the others.
CFLAGS should be used in every invocation of the C compiler, both those which do compilation and those which do linking.
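Taken together, a link command following these rules might be sketched as below; OBJS is an assumed variable holding the object files:

```makefile
# LDFLAGS because this command links; CFLAGS comes last so that
# the user's CFLAGS can override the other option variables.
prog: $(OBJS)
	$(CC) $(LDFLAGS) $(CFLAGS) -o prog $(OBJS)
```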
Every Makefile should define the variable INSTALL, which is the basic command for installing a file into the system.
Every Makefile should also define the variables INSTALL_PROGRAM and INSTALL_DATA. (The default for each of these should be $(INSTALL).) Then it should use those variables as the commands for actual installation, for executables and nonexecutables respectively. Use these variables as follows:
$(INSTALL_PROGRAM) foo $(bindir)/foo
$(INSTALL_DATA) libfoo.a $(libdir)/libfoo.a
Optionally, you may prepend the value of DESTDIR to the target filename. Doing this allows the installer to create a snapshot of the installation to be copied onto the real target filesystem later. Do not set the value of DESTDIR in your Makefile, and do not include it in any installed files. With support for DESTDIR, the above examples become:
$(INSTALL_PROGRAM) foo $(DESTDIR)$(bindir)/foo $(INSTALL_DATA) libfoo.a $(DESTDIR)$(libdir)/libfoo.a
Always use a file name, not a directory name, as the second argument of the installation commands. Use a separate command for each file to be installed.
Variables for Installation Directories
Installation directories should always be named by variables, so it is easy to install in a nonstandard place. The standard names for these variables are described below. They are based on a standard filesystem layout; variants of it are used in SVR4, 4.4BSD, Linux, Ultrix v4, and other modern operating systems.
These two variables set the root for the installation. All the other installation directories should be subdirectories of one of these two, and nothing should be directly installed into these two directories.
prefix
A prefix used in constructing the default values of the variables listed below. The default value of prefix should be /usr/local. When building the complete GNU system, the prefix will be empty and /usr will be a symbolic link to /. (If you are using Autoconf, write it as @prefix@.) Running make install with a different value of prefix from the one used to build the program should not recompile the program.
exec_prefix
A prefix used in constructing the default values of some of the variables listed below. The default value of exec_prefix should be $(prefix). (If you are using Autoconf, write it as @exec_prefix@.) Generally, $(exec_prefix) is used for directories that contain machine-specific files (such as executables and subroutine libraries), while $(prefix) is used directly for other directories. Running make install with a different value of exec_prefix from the one used to build the program should not recompile the program.
Executable programs are installed in one of the following directories.
bindir
The directory for installing executable programs that users can run. This should normally be /usr/local/bin, but write it as $(exec_prefix)/bin. (If you are using Autoconf, write it as @bindir@.)
sbindir
The directory for installing executable programs that can be run from the shell, but are only generally useful to system administrators. This should normally be /usr/local/sbin, but write it as $(exec_prefix)/sbin. (If you are using Autoconf, write it as @sbindir@.)
libexecdir
The directory for installing executable programs to be run by other programs rather than by users. This directory should normally be /usr/local/libexec, but write it as $(exec_prefix)/libexec. (If you are using Autoconf, write it as @libexecdir@.)
Data files used by the program during its execution are divided into categories in two ways.
- Some files are normally modified by programs; others are never normally modified (though users may edit some of these).
- Some files are architecture-independent and can be shared by all machines at a site; some are architecture-dependent and can be shared only by machines of the same kind and operating system; others may never be shared between two machines.
This makes for six different possibilities. However, we want to discourage the use of architecture-dependent files, aside from object files and libraries. It is much cleaner to make other data files architecture-independent, and it is generally not hard.
Therefore, here are the variables Makefiles should use to specify directories:
datadir
The directory for installing read-only architecture-independent data files. This should normally be /usr/local/share, but write it as $(prefix)/share. (If you are using Autoconf, write it as @datadir@.) As a special exception, see $(infodir) and $(includedir) below.
sysconfdir
The directory for installing read-only data files that pertain to a single machine; that is to say, files for configuring a host. Mailer and network configuration files, /etc/passwd, and so forth belong here. All the files in this directory should be ordinary ASCII text files. This directory should normally be /usr/local/etc, but write it as $(prefix)/etc. (If you are using Autoconf, write it as @sysconfdir@.) Do not install executables in this directory (they probably belong in $(libexecdir) or $(sbindir)). Also do not install files that are modified in the normal course of their use (programs whose purpose is to change the configuration of the system excluded). Those probably belong in $(localstatedir).
sharedstatedir
The directory for installing architecture-independent data files which the programs modify while they run. This should normally be /usr/local/com, but write it as $(prefix)/com. (If you are using Autoconf, write it as @sharedstatedir@.)
localstatedir
The directory for installing data files which the programs modify while they run, and that pertain to one specific machine. Users should never need to modify files in this directory to configure the package's operation; put such configuration information in separate files that go in $(datadir) or $(sysconfdir). $(localstatedir) should normally be /usr/local/var, but write it as $(prefix)/var. (If you are using Autoconf, write it as @localstatedir@.)
libdir
The directory for object files and libraries of object code. Do not install executables here; they probably ought to go in $(libexecdir) instead. The value of libdir should normally be /usr/local/lib, but write it as $(exec_prefix)/lib. (If you are using Autoconf, write it as @libdir@.)
infodir
The directory for installing the Info files for this package. By default, it should be /usr/local/info, but it should be written as $(prefix)/info. (If you are using Autoconf, write it as @infodir@.)
lispdir
The directory for installing any Emacs Lisp files in this package. By default, it should be /usr/local/share/emacs/site-lisp, but it should be written as $(prefix)/share/emacs/site-lisp. If you are using Autoconf, write the default as @lispdir@. In order to make @lispdir@ work, you need the following lines in your configure.in file:
lispdir='${datadir}/emacs/site-lisp'
AC_SUBST(lispdir)
includedir
The directory for installing header files to be included by user programs with the C #include preprocessor directive. This should normally be /usr/local/include, but write it as $(prefix)/include. (If you are using Autoconf, write it as @includedir@.) Most compilers other than GCC do not look for header files in the directory /usr/local/include. So installing the header files this way is only useful with GCC. Sometimes this is not a problem because some libraries are only really intended to work with GCC. But some libraries are intended to work with other compilers. They should install their header files in two places, one specified by includedir and one specified by oldincludedir.
oldincludedir
The directory for installing #include header files for use with compilers other than GCC. This should normally be /usr/include. (If you are using Autoconf, you can write it as @oldincludedir@.) The Makefile commands should check whether the value of oldincludedir is empty. If it is, they should not try to use it; they should cancel the second installation of the header files. A package should not replace an existing header in this directory unless the header came from the same package. Thus, if your Foo package provides a header file foo.h, then it should install the header file in the oldincludedir directory if either (1) there is no foo.h there or (2) the foo.h that exists came from the Foo package. To tell whether foo.h came from the Foo package, put a magic string in the file, as part of a comment, and grep for that string.
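The magic-string check might be scripted like this self-contained sketch; the marker comment, the file name foo.h, and the use of a temporary directory in place of the real oldincludedir are our own illustrative assumptions:

```shell
# Decide whether it is safe to (re)install foo.h in oldincludedir:
# safe if no foo.h is there, or if the existing one carries our marker.
oldincludedir=$(mktemp -d)
printf '/* This file is part of the Foo package. */\n' > "$oldincludedir/foo.h"

if test ! -f "$oldincludedir/foo.h" \
   || grep 'part of the Foo package' "$oldincludedir/foo.h" >/dev/null 2>&1
then
  echo "install foo.h"
else
  echo "leave existing foo.h alone"
fi
rm -rf "$oldincludedir"
```
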
Unix-style man pages are installed in one of the following:
mandir
The top-level directory for installing the man pages (if any) for this package. It will normally be /usr/local/man, but you should write it as $(prefix)/man. (If you are using Autoconf, write it as @mandir@.)
man1dir
The directory for installing section 1 man pages. Write it as $(mandir)/man1.
man2dir
The directory for installing section 2 man pages. Write it as $(mandir)/man2.
...
Don’t make the primary documentation for any GNU software be a man page. Write a manual in Texinfo instead. Man pages are just for the sake of people running GNU software on Unix, which is a secondary application only.
manext
The file name extension for the installed man page. This should contain a period followed by the appropriate digit; it should normally be .1.
man1ext
The file name extension for installed section 1 man pages.
man2ext
The file name extension for installed section 2 man pages.
...
Use these names instead of manext if the package needs to install man pages in more than one section of the manual.
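A sketch of how these variables combine in an install rule; the file name foo.1 and the install-man target name are illustrative:

```makefile
mandir  = $(prefix)/man
man1dir = $(mandir)/man1
man1ext = .1

# The leading `-' ignores errors on systems without the Unix
# man page documentation system.
install-man: foo.1
	-$(INSTALL_DATA) $(srcdir)/foo.1 $(DESTDIR)$(man1dir)/foo$(man1ext)
```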
And finally, you should set the following variable:
srcdir
The directory for the sources being compiled. The value of this variable is normally inserted by the configure shell script. (If you are using Autoconf, use srcdir = @srcdir@.)
For example:
# Common prefix for installation directories.
# NOTE: This directory must exist when you start the install.
prefix = /usr/local
exec_prefix = $(prefix)
# Where to put the executable for the command `gcc'.
bindir = $(exec_prefix)/bin
# Where to put the directories used by the compiler.
libexecdir = $(exec_prefix)/libexec
# Where to put the Info files.
infodir = $(prefix)/info
If your program installs a large number of files into one of the standard user-specified directories, it might be useful to group them into a subdirectory particular to that program. If you do this, you should write the install rule to create these subdirectories.
Do not expect the user to include the subdirectory name in the value of any of the variables listed above. The idea of having a uniform set of variable names for installation directories is to enable the user to specify the exact same values for several different GNU packages. In order for this to be useful, all the packages must be designed so that they will work sensibly when the user does so.
Standard Targets for Users
All GNU programs should have the following targets in their Makefiles:
all
Compile the entire program. This should be the default target. This target need not rebuild any documentation files; Info files should normally be included in the distribution, and DVI files should be made only when explicitly asked for. By default, the Make rules should compile and link with -g, so that executable programs have debugging symbols. Users who don't mind being helpless can strip the executables later if they wish.
install
Compile the program and copy the executables, libraries, and so on to the file names where they should reside for actual use. If there is a simple test to verify that a program is properly installed, this target should run that test. Do not strip executables when installing them. Devil-may-care users can use the install-strip target to do that. If possible, write the install target rule so that it does not modify anything in the directory where the program was built, provided make all has just been done. This is convenient for building the program under one user name and installing it under another. The commands should create all the directories in which files are to be installed, if they don't already exist. This includes the directories specified as the values of the variables prefix and exec_prefix, as well as all subdirectories that are needed. One way to do this is by means of an installdirs target as described below. Use - before any command for installing a man page, so that make will ignore any errors. This is in case there are systems that don't have the Unix man page documentation system installed. The way to install Info files is to copy them into $(infodir) with $(INSTALL_DATA) (see section Variables for Specifying Commands), and then run the install-info program if it is present. install-info is a program that edits the Info dir file to add or update the menu entry for the given Info file; it is part of the Texinfo package. Here is a sample rule to install an Info file:
$(DESTDIR)$(infodir)/foo.info: foo.info
	$(POST_INSTALL)
# There may be a newer info file in . than in srcdir.
	-if test -f foo.info; then d=.; \
	 else d=$(srcdir); fi; \
	$(INSTALL_DATA) $$d/foo.info $(DESTDIR)$@; \
# Run install-info only if it exists.
# Use `if' instead of just prepending `-' to the
# line so we notice real errors from install-info.
# We use `$(SHELL) -c' because some shells do not
# fail gracefully when there is an unknown command.
	if $(SHELL) -c 'install-info --version' \
	   >/dev/null 2>&1; then \
	  install-info --dir-file=$(DESTDIR)$(infodir)/dir \
	               $(DESTDIR)$(infodir)/foo.info; \
	else true; fi
When writing the install target, you must classify all the commands into three categories: normal ones, pre-installation commands and post-installation commands. See section Install Command Categories.
uninstall
Delete all the installed files: the copies that the install target creates. This rule should not modify the directories where compilation is done, only the directories where files are installed. The uninstallation commands are divided into three categories, just like the installation commands. See section Install Command Categories.
install-strip
Like install, but strip the executable files while installing them. In many cases, the definition of this target can be very simple:
install-strip:
	$(MAKE) INSTALL_PROGRAM='$(INSTALL_PROGRAM) -s' \
	  install
Normally we do not recommend stripping an executable unless you are sure the program has no bugs. However, it can be reasonable to install a stripped executable for actual execution while saving the unstripped executable elsewhere in case there is a bug.
clean
Delete all files from the current directory that are normally created by building the program. Don't delete the files that record the configuration. Also preserve files that could be made by building, but normally aren't because the distribution comes with them. Delete .dvi files here if they are not part of the distribution.
distclean
Delete all files from the current directory that are created by configuring or building the program. If you have unpacked the source and built the program without creating any other files, make distclean should leave only the files that were in the distribution.
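As a sketch, the clean and distclean targets for a small package might read as follows; the file names (prog, config.*) are illustrative:

```makefile
clean:
	rm -f *.o prog core

# distclean also removes the files that record the configuration.
distclean: clean
	rm -f Makefile config.status config.cache config.log
```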
mostlyclean
Like clean, but may refrain from deleting a few files that people normally don't want to recompile. For example, the mostlyclean target for GCC does not delete libgcc.a, because recompiling it is rarely necessary and takes a lot of time.
maintainer-clean
Delete almost everything from the current directory that can be reconstructed with this Makefile. This typically includes everything deleted by distclean, plus more: C source files produced by Bison, tags tables, Info files, and so on. The reason we say "almost everything" is that running the command make maintainer-clean should not delete configure even if configure can be remade using a rule in the Makefile. More generally, make maintainer-clean should not delete anything that needs to exist in order to run configure and then begin to build the program. This is the only exception; maintainer-clean should delete everything else that can be rebuilt. The maintainer-clean target is intended to be used by a maintainer of the package, not by ordinary users. You may need special tools to reconstruct some of the files that make maintainer-clean deletes. Since these files are normally included in the distribution, we don't take care to make them easy to reconstruct. If you find you need to unpack the full distribution again, don't blame us. To help make users aware of this, the commands for the special maintainer-clean target should start with these two:
	@echo 'This command is intended for maintainers to use; it'
	@echo 'deletes files that may need special tools to rebuild.'
TAGS
Update a tags table for this program.
info
Generate any Info files needed. The best way to write the rules is as follows:
info: foo.info
foo.info: foo.texi chap1.texi chap2.texi
	$(MAKEINFO) $(srcdir)/foo.texi
You must define the variable MAKEINFO in the Makefile. It should run the makeinfo program, which is part of the Texinfo distribution. Normally a GNU distribution comes with Info files, and that means the Info files are present in the source directory. Therefore, the Make rule for an info file should update it in the source directory. When users build the package, ordinarily Make will not update the Info files because they will already be up to date.
dvi
Generate DVI files for all Texinfo documentation. For example:
dvi: foo.dvi
foo.dvi: foo.texi chap1.texi chap2.texi
	$(TEXI2DVI) $(srcdir)/foo.texi
You must define the variable TEXI2DVI in the Makefile. It should run the program texi2dvi, which is part of the Texinfo distribution. Alternatively, write just the dependencies, and allow GNU make to provide the command.
dist
Create a distribution tar file for this program. The tar file should be set up so that the file names in the tar file start with a subdirectory name which is the name of the package it is a distribution for. This name can include the version number. For example, the distribution tar file of GCC version 1.40 unpacks into a subdirectory named gcc-1.40. The easiest way to do this is to create a subdirectory appropriately named, use ln or cp to install the proper files in it, and then tar that subdirectory. Compress the tar file with gzip. For example, the actual distribution file for GCC version 1.40 is called gcc-1.40.tar.gz. The dist target should explicitly depend on all non-source files that are in the distribution, to make sure they are up to date in the distribution. See section `Making Releases' in GNU Coding Standards.
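A minimal sketch of such a dist rule, assuming a DISTFILES variable that lists everything in the distribution; the package name and version are illustrative:

```makefile
PACKAGE = foo
VERSION = 1.0
distdir = $(PACKAGE)-$(VERSION)

# Stage the files in a versioned subdirectory, tar it, compress
# with gzip, and remove the staging directory.
dist: $(DISTFILES)
	rm -rf $(distdir)
	mkdir $(distdir)
	cp -p $(DISTFILES) $(distdir)
	tar -cf - $(distdir) | gzip -9 > $(distdir).tar.gz
	rm -rf $(distdir)
```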
check
Perform self-tests (if any). The user must build the program before running the tests, but need not install the program; you should write the self-tests so that they work when the program is built but not installed.
The following targets are suggested as conventional names, for programs in which they are useful.
installcheck
Perform installation tests (if any). The user must build and install the program before running the tests. You should not assume that $(bindir) is in the search path.
installdirs
It's useful to add a target named installdirs to create the directories where files are installed, and their parent directories. There is a script called mkinstalldirs which is convenient for this; you can find it in the Texinfo package. You can use a rule like this:
# Make sure all installation directories (e.g. $(bindir))
# actually exist by making them if necessary.
installdirs: mkinstalldirs
	$(srcdir)/mkinstalldirs $(bindir) $(datadir) \
	  $(libdir) $(infodir) \
	  $(mandir)
This rule should not modify the directories where compilation is done. It should do nothing but create installation directories.
Install Command Categories
When writing the install target, you must classify all the commands into three categories: normal ones, pre-installation commands and post-installation commands.
Normal commands move files into their proper places, and set their modes. They may not alter any files except the ones that come entirely from the package they belong to.
Pre-installation and post-installation commands may alter other files; in particular, they can edit global configuration files or data bases.
Pre-installation commands are typically executed before the normal commands, and post-installation commands are typically run after the normal commands.
The most common use for a post-installation command is to run install-info. This cannot be done with a normal command, since it alters a file (the Info directory) which does not come entirely and solely from the package being installed. It is a post-installation command because it needs to be done after the normal command which installs the package’s Info files.
Most programs don’t need any pre-installation commands, but we have the feature just in case it is needed.
To classify the commands in the install rule into these three categories, insert category lines among them. A category line specifies the category for the commands that follow.
A category line consists of a tab and a reference to a special Make variable, plus an optional comment at the end. There are three variables you can use, one for each category; the variable name specifies the category. Category lines are no-ops in ordinary execution because these three Make variables are normally undefined (and you should not define them in the makefile).
Here are the three possible category lines, each with a comment that explains what it means:
$(PRE_INSTALL)     # Pre-install commands follow.
$(POST_INSTALL)    # Post-install commands follow.
$(NORMAL_INSTALL)  # Normal commands follow.
If you don’t use a category line at the beginning of the install rule, all the commands are classified as normal until the first category line. If you don’t use any category lines, all the commands are classified as normal.
These are the category lines for uninstall:
$(PRE_UNINSTALL)    # Pre-uninstall commands follow.
$(POST_UNINSTALL)   # Post-uninstall commands follow.
$(NORMAL_UNINSTALL) # Normal commands follow.
Typically, a pre-uninstall command would be used for deleting entries from the Info directory.
If the install or uninstall target has any dependencies which act as subroutines of installation, then you should start each dependency’s commands with a category line, and start the main target’s commands with a category line also. This way, you can ensure that each command is placed in the right category regardless of which of the dependencies actually run.
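As a sketch (the program name foo and its files are hypothetical), an install rule using the category lines might look like this:

```makefile
install: installdirs
	$(NORMAL_INSTALL)
	# Normal commands: move files that come entirely from this package.
	$(INSTALL_PROGRAM) foo $(bindir)/foo
	$(INSTALL_DATA) $(srcdir)/foo.info $(infodir)/foo.info
	$(POST_INSTALL)
	# Post-install commands: may alter shared files such as the Info directory.
	install-info --info-dir=$(infodir) $(infodir)/foo.info
```

In ordinary execution the category lines expand to nothing; a binary-package builder can define the variables to marker strings and extract the commands it needs.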
Pre-installation and post-installation commands should not run any programs except for these:
[ basename bash cat chgrp chmod chown cmp cp dd diff echo egrep expand expr false fgrep find getopt grep gunzip gzip hostname install install-info kill ldconfig ln ls md5sum mkdir mkfifo mknod mv printenv pwd rm rmdir sed sort tee test touch true uname xargs yes
The reason for distinguishing the commands in this way is for the sake of making binary packages. Typically a binary package contains all the executables and other files that need to be installed, and has its own method of installing them–so it does not need to run the normal installation commands. But installing the binary package does need to execute the pre-installation and post-installation commands.
Programs to build binary packages work by extracting the pre-installation and post-installation commands. Here is one way of extracting the pre-installation commands:
make -n install -o all \
        PRE_INSTALL=pre-install \
        POST_INSTALL=post-install \
        NORMAL_INSTALL=normal-install \
  | gawk -f pre-install.awk

where the file pre-install.awk could contain this:

$0 ~ /^\t[ \t]*(normal_install|post_install)[ \t]*$/ {on = 0}
on {print $0}
$0 ~ /^\t[ \t]*pre_install[ \t]*$/ {on = 1}
The resulting file of pre-installation commands is executed as a shell script as part of installing the binary package.
Quick Reference
This appendix summarizes the directives, text manipulation functions, and special variables which GNU make understands. See section Special Built-in Target Names, section Catalogue of Implicit Rules, and section Summary of Options, for other summaries.
Here is a summary of the directives GNU make recognizes:
define variable
endef
Define a multi-line, recursively-expanded variable.
See section Defining Canned Command Sequences.
ifdef variable
ifndef variable
ifeq (a,b)
ifeq "a" "b"
ifeq 'a' 'b'
ifneq (a,b)
ifneq "a" "b"
ifneq 'a' 'b'
else
endif
Conditionally evaluate part of the makefile.
See section Conditional Parts of Makefiles.
include file
-include file
sinclude file
Include another makefile.
See section Including Other Makefiles.
override variable = value
override variable := value
override variable += value
override variable ?= value
override define variable
endef
Define a variable, overriding any previous definition, even one from the command line.
See section The override Directive.
export
Tell make to export all variables to child processes by default.
See section Communicating Variables to a Sub-make.
export variable
export variable = value
export variable := value
export variable += value
export variable ?= value
unexport variable
Tell make whether or not to export a particular variable to child processes.
See section Communicating Variables to a Sub-make.
vpath pattern path
Specify a search path for files matching a `%' pattern.
See section The vpath Directive.
vpath pattern
Remove all search paths previously specified for pattern.
vpath
Remove all search paths previously specified in any vpath directive.
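For instance (the directory names here are hypothetical), the three forms can be combined like this:

```makefile
vpath %.c src         # search src/ for .c files
vpath %.h include     # search include/ for header files
vpath %.c             # forget the search path for .c files
vpath                 # forget all vpath search paths
```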
Here is a summary of the text manipulation functions (see section Functions for Transforming Text):
$(subst from,to,text)
Replace from with to in text.
See section Functions for String Substitution and Analysis.
$(patsubst pattern,replacement,text)
Replace words matching pattern with replacement in text.
See section Functions for String Substitution and Analysis.
$(strip string)
Remove excess whitespace characters from string.
See section Functions for String Substitution and Analysis.
$(findstring find,text)
Locate find in text.
See section Functions for String Substitution and Analysis.
$(filter pattern…,text)
Select words in text that match one of the pattern words.
See section Functions for String Substitution and Analysis.
$(filter-out pattern…,text)
Select words in text that do not match any of the pattern words.
See section Functions for String Substitution and Analysis.
$(sort list)
Sort the words in list lexicographically, removing duplicates.
See section Functions for String Substitution and Analysis.
$(dir names…)
Extract the directory part of each file name.
See section Functions for File Names.
$(notdir names…)
Extract the non-directory part of each file name.
See section Functions for File Names.
$(suffix names…)
Extract the suffix (the last `.' and following characters) of each file name.
See section Functions for File Names.
$(basename names…)
Extract the base name (name without suffix) of each file name.
See section Functions for File Names.
$(addsuffix suffix,names…)
Append suffix to each word in names.
See section Functions for File Names.
$(addprefix prefix,names…)
Prepend prefix to each word in names.
See section Functions for File Names.
$(join list1,list2)
Join two parallel lists of words.
See section Functions for File Names.
$(word n,text)
Extract the nth word (one-origin) of text.
See section Functions for File Names.
$(words text)
Count the number of words in text.
See section Functions for File Names.
$(wordlist s,e,text)
Return the list of words in text from position s to position e (inclusive).
See section Functions for File Names.
$(firstword names…)
Extract the first word of names.
See section Functions for File Names.
$(wildcard pattern…)
Find file names matching a shell file name pattern (not a `%' pattern).
See section The Function wildcard.
$(error text…)
When this function is evaluated, make generates a fatal error with the message text.
See section Functions That Control Make.
$(warning text…)
When this function is evaluated, make generates a warning with the message text.
See section Functions That Control Make.
$(shell command)
Execute a shell command and return its output.
See section The shell Function.
$(origin variable)
Return a string describing how the make variable variable was defined.
See section The origin Function.
$(foreach var,words,text)
Evaluate text with var bound to each word in words, and concatenate the results.
See section The foreach Function.
$(call var,param,…)
Evaluate the variable var, replacing any references to $(1), $(2), and so on with the first, second, etc. param values.
See section The call Function.
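A few of these functions in action, as a sketch; the variable values are hypothetical, and the results shown in the comments assume exactly these inputs:

```makefile
sources := src/main.c src/util.c doc/notes.txt

objs  := $(patsubst %.c,%.o,$(filter %.c,$(sources)))  # src/main.o src/util.o
dirs  := $(sort $(dir $(sources)))                     # doc/ src/
names := $(notdir $(basename $(sources)))              # main util notes

swap = $(2) $(1)
pair := $(call swap,a,b)                               # b a
```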
Here is a summary of the automatic variables. See section Automatic Variables, for full information.
$@
The file name of the target.
$%
The target member name, when the target is an archive member.
$<
The name of the first prerequisite.
$?
The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (see section Using make to Update Archive Files).
$^
$+
The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (see section Using make to Update Archive Files). The value of $^ omits duplicate prerequisites, while $+ retains them and preserves their order.
$*
The stem with which an implicit rule matches (see section How Patterns Match).
$(@D)
$(@F)
The directory part and the file-within-directory part of $@.
$(*D)
$(*F)
The directory part and the file-within-directory part of $*.
$(%D)
$(%F)
The directory part and the file-within-directory part of $%.
$(<D)
$(<F)
The directory part and the file-within-directory part of $<.
$(^D)
$(^F)
The directory part and the file-within-directory part of $^.
$(+D)
$(+F)
The directory part and the file-within-directory part of $+.
$(?D)
$(?F)
The directory part and the file-within-directory part of $?.
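A hypothetical two-rule makefile illustrating the most common automatic variables:

```makefile
# In this rule: $@ is `prog', $< is `main.o', $^ is `main.o util.o'.
prog: main.o util.o
	$(CC) -o $@ $^

# In this pattern rule, building main.o: $* is the stem `main',
# $(@D) is `.' and $(@F) is `main.o'.
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@
```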
These variables are used specially by GNU make:
MAKEFILES
Makefiles to be read on every invocation of make.
See section The Variable MAKEFILES.
VPATH
Directory search path for files not found in the current directory.
See section VPATH: Search Path for All Prerequisites.
SHELL
The name of the system default command interpreter, usually /bin/sh. You can set SHELL in the makefile to change the shell used to run commands. See section Command Execution.
MAKESHELL
On MS-DOS only, the name of the command interpreter that is to be used by make. This value takes precedence over the value of SHELL. See section Command Execution.
MAKE
The name with which make was invoked. Using this variable in commands has special meaning. See section How the MAKE Variable Works.
MAKELEVEL
The number of levels of recursion (sub-makes).
See section Communicating Variables to a Sub-make.
MAKEFLAGS
The flags given to make. You can set this in the environment or a makefile to set flags.
See section Communicating Options to a Sub-make. It is never appropriate to use MAKEFLAGS directly on a command line: its contents may not be quoted correctly for use in the shell. Always allow recursive invocations of make to obtain these values through the environment from the parent.
MAKECMDGOALS
The targets given to make on the command line. Setting this variable has no effect on the operation of make.
See section Arguments to Specify the Goals.
CURDIR
Set to the pathname of the current working directory (after all -C options are processed, if any). Setting this variable has no effect on the operation of make.
See section Recursive Use of make.
SUFFIXES
The default list of suffixes before make reads any makefiles.
.LIBPATTERNS
Defines the naming of the libraries make searches for, and their order.
See section Directory Search for Link Libraries.
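As a small sketch of how MAKE and MAKELEVEL come into play (the subdirectory names are hypothetical):

```makefile
SUBDIRS = lib src

# Each recursive invocation sees MAKELEVEL incremented by one;
# using $(MAKE) lets the sub-makes inherit options from the parent.
subdirs:
	for dir in $(SUBDIRS); do \
	  $(MAKE) -C $$dir all; \
	done
```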
Errors Generated by Make
Here is a list of the more common errors you might see generated by make, and some information about what they mean and how to fix them.
Sometimes make errors are not fatal, especially in the presence of a - prefix on a command script line, or the -k command line option. Errors that are fatal are prefixed with the string ***.
Error messages are all either prefixed with the name of the program (usually make), or, if the error is found in a makefile, with the name of the file and line number containing the problem. In the table below, these common prefixes are left off.
[foo] Error NN
[foo] signal description
These errors are not really make errors at all. They mean that a program that make invoked as part of a command script returned a non-0 error code (Error NN), which make interprets as failure, or it exited in some other abnormal fashion (with a signal of some type). See section Errors in Commands. If no *** is attached to the message, then the subprocess failed but the rule in the makefile was prefixed with the - special character, so make ignored the error.
missing separator. Stop.
missing separator (did you mean TAB instead of 8 spaces?). Stop.
This means that make could not understand much of anything about the command line it just read. GNU make looks for various kinds of separators (:, =, TAB characters, etc.) to help it decide what kind of command line it’s seeing. This means it couldn’t find a valid one. One of the most common reasons for this message is that you (or perhaps your oh-so-helpful editor, as is the case with many MS-Windows editors) have attempted to indent your command scripts with spaces instead of a TAB character. In this case, make will use the second form of the error above. Remember that every line in the command script must begin with a TAB character. Eight spaces do not count. See section Rule Syntax.
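For example, this hypothetical rule triggers the error when the recipe line is indented with spaces; the fix is to make it begin with a single TAB character:

```makefile
hello: hello.c
	$(CC) -o hello hello.c   # this line must begin with a TAB, not spaces
```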
commands commence before first target. Stop.
missing rule before commands. Stop.
This means the first thing in the makefile seems to be part of a command script: it begins with a TAB character and doesn’t appear to be a legal make command (such as a variable assignment). Command scripts must always be associated with a target. The second form is generated if the line has a semicolon as the first non-whitespace character; make interprets this to mean you left out the “target: prerequisite” section of a rule. See section Rule Syntax.
No rule to make target `xxx'.
No rule to make target `xxx', needed by `yyy'.
This means that make decided it needed to build a target, but then couldn’t find any instructions in the makefile on how to do that, either explicit or implicit (including in the default rules database). If you want that file to be built, you will need to add a rule to your makefile describing how that target can be built. Other possible sources of this problem are typos in the makefile (if that filename is wrong) or a corrupted source tree (if that file is not supposed to be built, but rather only a prerequisite).
No targets specified and no makefile found. Stop.
No targets. Stop.
The former means that you didn’t provide any targets to be built on the command line, and make couldn’t find any makefiles to read in. The latter means that some makefile was found, but it didn’t contain any default target and none was given on the command line. GNU make has nothing to do in these situations. See section Arguments to Specify the Makefile.
Makefile `xxx' was not found.
Included makefile `xxx' was not found.
A makefile specified on the command line (first form) or included (second form) was not found.
warning: overriding commands for target `xxx'
warning: ignoring old commands for target `xxx'
GNU make allows commands to be specified only once per target (except for double-colon rules). If you give commands for a target which already has been defined to have commands, this warning is issued and the second set of commands will overwrite the first set. See section Multiple Rules for One Target.
Circular xxx <- yyy dependency dropped.
This means that make detected a loop in the dependency graph: after tracing the prerequisite yyy of target xxx, and its prerequisites, etc., one of them depended on xxx again.
Recursive variable `xxx' references itself (eventually). Stop.
This means you’ve defined a normal (recursive) make variable xxx that, when it’s expanded, will refer to itself (xxx). This is not allowed; either use simply-expanded variables (:=) or use the append operator (+=). See section How to Use Variables.
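A sketch of the problem and the two usual fixes:

```makefile
# Broken: a recursive variable that refers to itself loops on expansion.
# CFLAGS = $(CFLAGS) -O2

# Fix 1: a simply-expanded assignment takes a snapshot of the old value.
CFLAGS := $(CFLAGS) -O2

# Fix 2: the append operator.
# CFLAGS += -O2
```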
Unterminated variable reference. Stop.
This means you forgot to provide the proper closing parenthesis or brace in your variable or function reference.
insufficient arguments to function `xxx'. Stop.
This means you haven’t provided the requisite number of arguments for this function. See the documentation of the function for a description of its arguments. See section Functions for Transforming Text.
missing target pattern. Stop.
multiple target patterns. Stop.
target pattern contains no `%'. Stop.
These are generated for malformed static pattern rules. The first means there’s no pattern in the target section of the rule, the second means there are multiple patterns in the target section, and the third means the target doesn’t contain a pattern character (%). See section Syntax of Static Pattern Rules.
warning: -jN forced in submake: disabling jobserver mode.
This warning and the next are generated if make detects error conditions related to parallel processing on systems where sub-makes can communicate (see section Communicating Options to a Sub-make). This warning is generated if a recursive invocation of a make process is forced to have -jN in its argument list (where N is greater than one). This could happen, for example, if you set the MAKE environment variable to make -j2. In this case, the sub-make doesn’t communicate with other make processes and will simply pretend it has two jobs of its own.
warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
In order for make processes to communicate, the parent will pass information to the child. Since this could result in problems if the child process isn’t actually a make, the parent will only do this if it thinks the child is a make. The parent uses the normal algorithms to determine this (see section How the MAKE Variable Works). If the makefile is constructed such that the parent doesn’t know the child is a make process, then the child will receive only part of the information necessary. In this case, the child will generate this warning message and proceed with its build in a sequential manner.
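A hypothetical recipe that hides the recursive make inside a wrapper script; prefixing the command with `+' tells make to treat it as a sub-make, so the jobserver information is passed along:

```makefile
build-subdir:
	+./run-make.sh subdir
```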
Complex Makefile Example
Here is the makefile for the GNU tar program. This is a moderately complex makefile.
Because it is the first target, the default goal is all. An interesting feature of this makefile is that testpad.h is a source file automatically created by the testpad program, itself compiled from testpad.c.
If you type make or make all, then make creates the tar executable, the rmt daemon that provides remote tape access, and the tar.info Info file.
If you type make install, then make not only creates tar, rmt, and tar.info, but also installs them.
If you type make clean, then make removes the .o files, and the tar, rmt, testpad, testpad.h, and core files.
If you type make distclean, then make not only removes the same files as does make clean but also the TAGS, Makefile, and config.status files. (Although it is not evident, this makefile (and config.status) is generated by the user with the configure program, which is provided in the tar distribution, but is not shown here.)
If you type make realclean, then make removes the same files as does make distclean and also removes the Info files generated from tar.texinfo.
In addition, there are targets shar and dist that create distribution kits.
# Generated automatically from Makefile.in by configure.
# Un*x Makefile for GNU tar program.
# Copyright (C) 1991 Free Software Foundation, Inc.
# This program is free software; you can redistribute
# it and/or modify it under the terms of the GNU
# General Public License …
…
…
SHELL = /bin/sh
#### Start of system configuration section. ####
srcdir = .
# If you use gcc, you should either run the
# fixincludes script that comes with it or else use
# gcc with the -traditional option. Otherwise ioctl
# calls will be compiled incorrectly on some systems.
CC = gcc -O
YACC = bison -y
INSTALL = /usr/local/bin/install -c
INSTALLDATA = /usr/local/bin/install -c -m 644
# Things you might add to DEFS:
# -DSTDC_HEADERS If you have ANSI C headers and
# libraries.
# -DPOSIX If you have POSIX.1 headers and
# libraries.
# -DBSD42 If you have sys/dir.h (unless
# you use -DPOSIX), sys/file.h,
# and st_blocks in `struct stat'.
# -DUSG If you have System V/ANSI C
# string and memory functions
# and headers, sys/sysmacros.h,
# fcntl.h, getcwd, no valloc,
# and ndir.h (unless
# you use -DDIRENT).
# -DNO_MEMORY_H If USG or STDC_HEADERS but do not
# include memory.h.
# -DDIRENT If USG and you have dirent.h
# instead of ndir.h.
# -DSIGTYPE=int If your signal handlers
# return int, not void.
# -DNO_MTIO If you lack sys/mtio.h
# (magtape ioctls).
# -DNO_REMOTE If you do not have a remote shell
# or rexec.
# -DUSE_REXEC To use rexec for remote tape
# operations instead of
# forking rsh or remsh.
# -DVPRINTF_MISSING If you lack vprintf function
# (but have _doprnt).
# -DDOPRNT_MISSING If you lack _doprnt function.
# Also need to define
# -DVPRINTF_MISSING.
# -DFTIME_MISSING If you lack ftime system call.
# -DSTRSTR_MISSING If you lack strstr function.
# -DVALLOC_MISSING If you lack valloc function.
# -DMKDIR_MISSING If you lack mkdir and
# rmdir system calls.
# -DRENAME_MISSING If you lack rename system call.
# -DFTRUNCATE_MISSING If you lack ftruncate
# system call.
# -DV7 On Version 7 Unix (not
# tested in a long time).
# -DEMUL_OPEN3 If you lack a 3-argument version
# of open, and want to emulate it
# with system calls you do have.
# -DNO_OPEN3 If you lack the 3-argument open
# and want to disable the tar -k
# option instead of emulating open.
# -DXENIX If you have sys/inode.h
# and need it to be included.
DEFS = -DSIGTYPE=int -DDIRENT -DSTRSTR_MISSING \
-DVPRINTF_MISSING -DBSD42
# Set this to rtapelib.o unless you defined NO_REMOTE,
# in which case make it empty.
RTAPELIB = rtapelib.o
LIBS =
DEF_AR_FILE = /dev/rmt8
DEFBLOCKING = 20
CDEBUG = -g
CFLAGS = $(CDEBUG) -I. -I$(srcdir) $(DEFS) \
-DDEF_AR_FILE=\"$(DEF_AR_FILE)\" \
-DDEFBLOCKING=$(DEFBLOCKING)
LDFLAGS = -g
prefix = /usr/local
# Prefix for each installed program,
# normally empty or `g'.
binprefix =
# The directory to install tar in.
bindir = $(prefix)/bin
# The directory to install the info files in.
infodir = $(prefix)/info
#### End of system configuration section. ####
SRC1 = tar.c create.c extract.c buffer.c \
getoldopt.c update.c gnu.c mangle.c
SRC2 = version.c list.c names.c diffarch.c \
port.c wildmat.c getopt.c
SRC3 = getopt1.c regex.c getdate.y
SRCS = $(SRC1) $(SRC2) $(SRC3)
OBJ1 = tar.o create.o extract.o buffer.o \
getoldopt.o update.o gnu.o mangle.o
OBJ2 = version.o list.o names.o diffarch.o \
port.o wildmat.o getopt.o
OBJ3 = getopt1.o regex.o getdate.o $(RTAPELIB)
OBJS = $(OBJ1) $(OBJ2) $(OBJ3)
AUX = README COPYING ChangeLog Makefile.in \
makefile.pc configure configure.in \
tar.texinfo tar.info* texinfo.tex \
tar.h port.h open3.h getopt.h regex.h \
rmt.h rmt.c rtapelib.c alloca.c \
msd_dir.h msd_dir.c tcexparg.c \
level-0 level-1 backup-specs testpad.c
all: tar rmt tar.info
tar: $(OBJS)
$(CC) $(LDFLAGS) -o $@ $(OBJS) $(LIBS)
rmt: rmt.c
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ rmt.c
tar.info: tar.texinfo
makeinfo tar.texinfo
install: all
$(INSTALL) tar $(bindir)/$(binprefix)tar
-test ! -f rmt || $(INSTALL) rmt /etc/rmt
$(INSTALLDATA) $(srcdir)/tar.info* $(infodir)
$(OBJS): tar.h port.h testpad.h
regex.o buffer.o tar.o: regex.h
# getdate.y has 8 shift/reduce conflicts.
testpad.h: testpad
./testpad
testpad: testpad.o
$(CC) -o $@ testpad.o
TAGS: $(SRCS)
etags $(SRCS)
clean:
rm -f *.o tar rmt testpad testpad.h core
distclean: clean
rm -f TAGS Makefile config.status
realclean: distclean
rm -f tar.info*
shar: $(SRCS) $(AUX)
shar $(SRCS) $(AUX) | compress \
> tar-`sed -e '/version_string/!d' \
-e 's/[^0-9.]*\([0-9.]*\).*/\1/' \
-e q
version.c`.shar.Z
dist: $(SRCS) $(AUX)
echo tar-`sed \
-e '/version_string/!d' \
-e 's/[^0-9.]*\([0-9.]*\).*/\1/' \
-e q
version.c` > .fname
-rm -rf `cat .fname`
mkdir `cat .fname`
ln $(SRCS) $(AUX) `cat .fname`
tar chZf `cat .fname`.tar.Z `cat .fname`
-rm -rf `cat .fname` .fname
tar.zoo: $(SRCS) $(AUX)
-rm -rf tmp.dir
-mkdir tmp.dir
-rm tar.zoo
for X in $(SRCS) $(AUX) ; do \
echo $$X ; \
sed 's/$$/^M/' $$X \
> tmp.dir/$$X ; done
cd tmp.dir ; zoo aM ../tar.zoo *
-rm -rf tmp.dir
$
%
- %, in pattern rules
- %, quoting in static pattern
- %, quoting in patsubst
- %, quoting in vpath
- %, quoting with \ (backslash), %, quoting with \ (backslash), %, quoting with \ (backslash)
*
,
-
- - (in commands)
- -, and define
- --assume-new, --assume-new
- --assume-new, and recursion
- --assume-old, --assume-old
- --assume-old, and recursion
- --debug
- --directory, --directory
- --directory, and recursion
- --directory, and --print-directory
- --dry-run, --dry-run, --dry-run
- --environment-overrides
- --file, --file, --file
- --file, and recursion
- --help
- --ignore-errors, --ignore-errors
- --include-dir, --include-dir
- --jobs, --jobs
- --jobs, and recursion
- --just-print, --just-print, --just-print
- --keep-going, --keep-going, --keep-going
- --load-average, --load-average
- --makefile, --makefile, --makefile
- --max-load, --max-load
- --new-file, --new-file
- --new-file, and recursion
- --no-builtin-rules
- --no-builtin-variables
- --no-keep-going
- --no-print-directory, --no-print-directory
- --old-file, --old-file
- --old-file, and recursion
- --print-data-base
- --print-directory
- --print-directory, and recursion
- --print-directory, disabling
- --print-directory, and --directory
- --question, --question
- --quiet, --quiet
- --recon, --recon, --recon
- --silent, --silent
- --stop
- --touch, --touch
- --touch, and recursion
- --version
- --warn-undefined-variables
- --what-if, --what-if
- -b
- -C, -C
- -C, and recursion
- -C, and -w
- -d
- -e
- -e (shell flag)
- -f, -f, -f
- -f, and recursion
- -h
- -I, -I
- -i, -i
- -j, -j
- -j, and archive update
- -j, and recursion
- -k, -k, -k
- -l
- -l (library search)
- -l (load average)
- -m
- -M (to compiler)
- -MM (to GNU compiler)
- -n, -n, -n
- -o, -o
- -o, and recursion
- -p
- -q, -q
- -R
- -r
- -S
- -s, -s
- -t, -t
- -t, and recursion
- -v
- -W, -W
- -w
- -W, and recursion
- -w, and recursion
- -w, disabling
- -w, and -C
.
- .a (archives)
- .C
- .c
- .cc
- .ch
- .d
- .def
- .dvi
- .F
- .f
- .info
- .l
- .LIBPATTERNS, and link libraries
- .ln
- .mod
- .o, .o
- .p
- .PRECIOUS intermediate files
- .r
- .S
- .s
- .sh
- .sym
- .tex
- .texi
- .texinfo
- .txinfo
- .w
- .web
- .y
:
=
?
@
[
\
- \ (backslash), for continuation lines
- \ (backslash), in commands
- \ (backslash), to quote %, \ (backslash), to quote %, \ (backslash), to quote %
_
a
- algorithm for directory search
- all (standard target)
- appending to variables
- ar
- archive
- archive member targets
- archive symbol directory updating
- archive, and -j
- archive, and parallel execution
- archive, suffix rule for
- Arg list too long
- arguments of functions
- as, as
- assembly, rule to compile
- automatic generation of prerequisites, automatic generation of prerequisites
- automatic variables
b
- backquotes
- backslash (\), for continuation lines
- backslash (\), in commands
- backslash (\), to quote %, backslash (\), to quote %, backslash (\), to quote %
- backslashes in pathnames and wildcard expansion
- basename
- binary packages
- broken pipe
- bugs, reporting
- built-in special targets
c
- C++, rule to compile
- C, rule to compile
- cc, cc
- cd (shell command), cd (shell command)
- chains of rules
- check (standard target)
- clean (standard target)
- clean target, clean target
- cleaning up
- clobber (standard target)
- co, co
- combining rules by prerequisite
- command line variable definitions, and recursion
- command line variables
- commands
- commands, backslash (\) in
- commands, comments in
- commands, echoing
- commands, empty
- commands, errors in
- commands, execution
- commands, execution in parallel
- commands, expansion
- commands, how to write
- commands, instead of executing
- commands, introduction to
- commands, quoting newlines in
- commands, sequences of
- comments, in commands
- comments, in makefile
- compatibility
- compatibility in exporting
- compilation, testing
- computed variable name
- conditional expansion
- conditional variable assignment
- conditionals
- continuation lines
- controlling make
- conventions for makefiles
- ctangle, ctangle
- cweave, cweave
d
- data base of make rules
- deducing commands (implicit rules)
- default directries for included makefiles
- default goal, default goal
- default makefile name
- default rules, last-resort
- define, expansion
- defining variables verbatim
- deletion of target files, deletion of target files
- directive
- directories, printing them
- directories, updating archive symbol
- directory part
- directory search (VPATH)
- directory search (VPATH), and implicit rules
- directory search (VPATH), and link libraries
- directory search (VPATH), and shell commands
- directory search algorithm
- directory search, traditional
- dist (standard target)
- distclean (standard target)
- dollar sign ($), in function call
- dollar sign ($), in rules
- dollar sign ($), in variable name
- dollar sign ($), in variable reference
- double-colon rules
- duplicate words, removing
e
- E2BIG
- echoing of commands
- editor
- Emacs (M-x compile)
- empty commands
- empty targets
- environment
- environment, and recursion
- environment, SHELL in
- error, stopping on
- errors (in commands)
- errors with wildcards
- execution, in parallel
- execution, instead of
- execution, of commands
- exit status (errors)
- explicit rule, definition of
- explicit rule, expansion
- exporting variables
f
- f77, f77
- features of GNU make
- features, missing
- file name functions
- file name of makefile
- file name of makefile, how to specify
- file name prefix, adding
- file name suffix
- file name suffix, adding
- file name with wildcards
- file name, basename of
- file name, directory part
- file name, nondirectory part
- files, assuming new
- files, assuming old
- files, avoiding recompilation of
- files, intermediate
- filtering out words
- filtering words
- finding strings
- flags
- flags for compilers
- flavors of variables
- FORCE
- force targets
- Fortran, rule to compile
- functions
- functions, for controlling make
- functions, for file names
- functions, for text
- functions, syntax of
- functions, user defined
g
- g++, g++
- gcc
- generating prerequisites automatically, generating prerequisites automatically
- get, get
- globbing (wildcards)
- goal
- goal, default, goal, default
- goal, how to specify
h
i
- IEEE Standard 1003.2
- ifdef, expansion
- ifeq, expansion
- ifndef, expansion
- ifneq, expansion
- implicit rule
- implicit rule, and directory search
- implicit rule, and VPATH
- implicit rule, definition of
- implicit rule, expansion
- implicit rule, how to use
- implicit rule, introduction to
- implicit rule, predefined
- implicit rule, search algorithm
- included makefiles, default directries
- including (MAKEFILES variable)
- including other makefiles
- incompatibilities
- Info, rule to format
- install (standard target)
- intermediate files
- intermediate files, preserving
- intermediate targets, explicit
- interrupt
j
k
l
- last-resort default rules
- ld
- lex, lex
- Lex, rule to run
- libraries for linking, directory search
- library archive, suffix rule for
- limiting jobs based on load
- link libraries, and directory search
- link libraries, patterns matching
- linking, predefined rule for
- lint
- lint, rule to run
- list of all prerequisites
- list of changed prerequisites
- load average
- loops in variable expansion
- lpr (shell command), lpr (shell command)
m
- m2c
- macro
- make depend
- MAKECMDGOALS
- makefile
- makefile name
- makefile name, how to specify
- makefile rule parts
- makefile, and MAKEFILES variable
- makefile, conventions for
- makefile, how make processes
- makefile, how to write
- makefile, including
- makefile, overriding
- makefile, parsing
- makefile, remaking of
- makefile, simple
- makeinfo, makeinfo
- match-anything rule
- match-anything rule, used to override
- missing features
- mistakes with wildcards
- modified variable reference
- Modula-2, rule to compile
- mostlyclean (standard target)
- multiple rules for one target
- multiple rules for one target (::)
- multiple targets
- multiple targets, in pattern rule
n
- name of makefile
- name of makefile, how to specify
- nested variable reference
- newline, quoting, in commands
- newline, quoting, in makefile
- nondirectory part
o
- OBJ
- obj
- OBJECTS
- objects
- OBJS
- objs
- old-fashioned suffix rules
- options
- options, and recursion
- options, setting from environment
- options, setting in makefiles
- order of pattern rules
- origin of variable
- overriding makefiles
- overriding variables with arguments
- overriding with override
p
- parallel execution
- parallel execution, and archive update
- parallel execution, overriding
- parts of makefile rule
- Pascal, rule to compile
- pattern rule
- pattern rule, expansion
- pattern rules, order of
- pattern rules, static (not implicit)
- pattern rules, static, syntax of
- pattern-specific variables
- pc
- phony targets
- pitfalls of wildcards
- portability
- POSIX
- POSIX.2
- post-installation commands
- pre-installation commands
- precious targets
- predefined rules and variables, printing
- prefix, adding
- prerequisite
- prerequisite pattern, implicit
- prerequisite pattern, static (not implicit)
- prerequisite, expansion
- prerequisites
- prerequisites, automatic generation
- prerequisites, introduction to
- prerequisites, list of all
- prerequisites, list of changed
- prerequisites, varying (static pattern)
- preserving intermediate files
- preserving with .PRECIOUS
- preserving with .SECONDARY
- print (standard target)
- print target
- printing directories
- printing of commands
- printing user warnings
- problems and bugs, reporting
- problems with wildcards
- processing a makefile
q
- question mode
- quoting %, in static pattern
- quoting %, in patsubst
- quoting %, in vpath
- quoting newline, in commands
- quoting newline, in makefile
r
- Ratfor, rule to compile
- RCS, rule to extract from
- reading makefiles
- README
- realclean (standard target)
- recompilation
- recompilation, avoiding
- recording events with empty targets
- recursion
- recursion, and -C
- recursion, and -f
- recursion, and -j
- recursion, and -o
- recursion, and -t
- recursion, and -W
- recursion, and -w
- recursion, and command line variable definitions
- recursion, and environment
- recursion, and MAKE variable
- recursion, and MAKEFILES variable
- recursion, and options
- recursion, and printing directories
- recursion, and variables
- recursion, level of
- recursive variable expansion
- recursively expanded variables
- reference to variables
- relinking
- remaking makefiles
- removal of target files
- removing duplicate words
- removing targets on failure
- removing, to clean up
- reporting bugs
- rm
- rm (shell command)
- rule commands
- rule prerequisites
- rule syntax
- rule targets
- rule, and $
- rule, double-colon (::)
- rule, explicit, definition of
- rule, how to write
- rule, implicit
- rule, implicit, and directory search
- rule, implicit, and VPATH
- rule, implicit, chains of
- rule, implicit, definition of
- rule, implicit, how to use
- rule, implicit, introduction to
- rule, implicit, predefined
- rule, introduction to
- rule, multiple for one target
- rule, no commands or prerequisites
- rule, pattern
- rule, static pattern
- rule, static pattern versus implicit
- rule, with multiple targets
s
- s. (SCCS file prefix)
- SCCS, rule to extract from
- search algorithm, implicit rule
- search path for prerequisites (VPATH)
- search path for prerequisites (VPATH), and implicit rules
- search path for prerequisites (VPATH), and link libraries
- searching for strings
- secondary files
- secondary targets
- sed (shell command)
- selecting a word
- selecting word lists
- sequences of commands
- setting options from environment
- setting options in makefiles
- setting variables
- several rules for one target
- several targets in a rule
- shar (standard target)
- shell command
- shell command, and directory search
- shell command, execution
- shell command, function for
- shell file name pattern (in include)
- shell wildcards (in include)
- SHELL, MS-DOS specifics
- signal
- silent operation
- simple makefile
- simple variable expansion
- simplifying with variables
- simply expanded variables
- sorting words
- spaces, in variable values
- spaces, stripping
- special targets
- specifying makefile name
- standard input
- standards conformance
- standards for makefiles
- static pattern rule
- static pattern rule, syntax of
- static pattern rule, versus implicit
- stem
- stem, variable for
- stopping make
- strings, searching for
- stripping whitespace
- sub-make
- subdirectories, recursion for
- substitution variable reference
- suffix rule
- suffix rule, for archive
- suffix, adding
- suffix, function to find
- suffix, substituting in variables
- switches
- symbol directories, updating archive
- syntax of rules
t
- tab character (in commands)
- tabs in rules
- TAGS (standard target)
- tangle
- tar (standard target)
- target
- target pattern, implicit
- target pattern, static (not implicit)
- target, deleting on error
- target, deleting on interrupt
- target, expansion
- target, multiple in pattern rule
- target, multiple rules for one
- target, touching
- target-specific variables
- targets
- targets without a file
- targets, built-in special
- targets, empty
- targets, force
- targets, introduction to
- targets, multiple
- targets, phony
- terminal rule
- test (standard target)
- testing compilation
- tex
- TeX, rule to run
- texi2dvi
- Texinfo, rule to format
- tilde (~)
- touch (shell command)
- touching files
- traditional directory search
u
- undefined variables, warning message
- updating archive symbol directories
- updating makefiles
- user defined functions
v
- value
- value, how a variable gets it
- variable
- variable definition
- variables
- variables, $ in name
- variables, and implicit rule
- variables, appending to
- variables, automatic
- variables, command line
- variables, command line, and recursion
- variables, computed names
- variables, conditional assignment
- variables, defining verbatim
- variables, environment
- variables, exporting
- variables, flavors
- variables, how they get their values
- variables, how to reference
- variables, loops in expansion
- variables, modified reference
- variables, nested references
- variables, origin of
- variables, overriding
- variables, overriding with arguments
- variables, pattern-specific
- variables, recursively expanded
- variables, setting
- variables, simply expanded
- variables, spaces in values
- variables, substituting suffix in
- variables, substitution reference
- variables, target-specific
- variables, warning for undefined
- varying prerequisites
- verbatim variable definition
- vpath
- VPATH, and implicit rules
- VPATH, and link libraries
w
- warnings, printing
- weave
- Web, rule to run
- what if
- whitespace, in variable values
- whitespace, stripping
- wildcard
- wildcard pitfalls
- wildcard, function
- wildcard, in archive member
- wildcard, in include
- wildcards and MS-DOS/MS-Windows backslashes
- word, selecting a
- words, extracting first
- words, filtering
- words, filtering out
- words, finding number
- words, iterating over
- words, joining lists
- words, removing duplicates
- words, selecting lists of
- writing rule commands
- writing rules
y
~
Index of Functions, Variables, & Directives
$
- $%
- $(%D)
- $(%F)
- $(*D)
- $(*F)
- $(<D)
- $(<F)
- $(?D)
- $(?F)
- $(@D)
- $(@F)
- $(^D)
- $(^F)
- $*
- $*, and static pattern
- $+
- $<
- $?
- $@
- $^
%
*
- * (automatic variable)
- * (automatic variable), unsupported bizarre usage
- *D (automatic variable)
- *F (automatic variable)
.
- .DEFAULT
- .DEFAULT, and empty commands
- .DELETE_ON_ERROR
- .EXPORT_ALL_VARIABLES
- .IGNORE
- .INTERMEDIATE
- .LIBPATTERNS
- .NOTPARALLEL
- .PHONY
- .POSIX
- .PRECIOUS
- .SECONDARY
- .SILENT
- .SUFFIXES
/
<
?
@
^
a
b
c
d
e
f
g
i
j
l
m
- MAKE
- MAKECMDGOALS
- Makefile
- makefile
- MAKEFILES
- MAKEFLAGS
- MAKEINFO
- MAKELEVEL
- MAKEOVERRIDES
- MFLAGS
n
o
p
r
s
t
u
v
w
y
Footnotes
(1)
GNU Make compiled for MS-DOS and MS-Windows behaves as if prefix has been defined to be the root of the DJGPP tree hierarchy.
(2)
On MS-DOS, the value of current working directory is global, so changing it will affect the following command lines on those systems.
(3)
texi2dvi uses TeX to do the real work of formatting. TeX is not distributed with Texinfo.
Topics
1. Introduction
Graphical programs require a very different programming model from the non-graphical programs we have encountered in the past. A non-graphical program typically runs straight through from beginning to end. By contrast, a graphical program should be capable of running indefinitely, accepting input through the graphical user interface (GUI) and responding accordingly. This kind of programming is known as event-driven programming, because the program’s sequence of operation is determined by events generated by the GUI components. The program responds to events by invoking functions known as event handlers. For example, pushing the Print button may generate a “button-pushed” event, which results in a call to an event handler named print().
In general, a graphical program consists of the following key elements:
- Code to create GUI components, such as buttons, text areas, scrollable views, etc.
- Code that lays out the components within a container. Examples of containers are frames, which are stand-alone windows, and applets, which are windows that are embedded within a web page.
- Event handling code that specifies what should happen when the user interacts with the GUI components.
- An event loop, whose job is to wait for events to occur and to call appropriate event handlers.
The following pseudo-code illustrates how the event loop might work:
while (true) { // The event loop.
// Get the next event from the event queue.
Event e = get_next_event();
// Process the events by calling appropriate event handlers.
if (e.eventType == QUIT) {
exit(); // Terminate the program.
}
else if (e.eventType == BUTTON_PUSHED) {
if (e.eventSource == PRINT_BUTTON)
print(e); // Print out the current page.
else {
…
}
}
else {
…
}
}
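The loop above can be mimicked in Java without any GUI machinery. The sketch below is our own illustration for this lecture; the event types, the Event record, and the run method are invented names, not part of Java's actual event-dispatch implementation:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class EventLoopSketch {
    // Hypothetical event kinds, standing in for QUIT, BUTTON_PUSHED, etc.
    enum EventType { QUIT, BUTTON_PUSHED, OTHER }

    record Event(EventType type, String source) {}

    // Drains a pre-filled queue, dispatching each event to a handler and
    // recording what happened; stops when a QUIT event (or nothing) arrives.
    public static List<String> run(Queue<Event> queue) {
        List<String> log = new ArrayList<>();
        while (true) {                        // The event loop.
            Event e = queue.poll();           // Get the next event from the queue.
            if (e == null || e.type() == EventType.QUIT) {
                log.add("quit");              // Terminate the loop.
                break;
            } else if (e.type() == EventType.BUTTON_PUSHED) {
                if ("PRINT_BUTTON".equals(e.source())) {
                    log.add("print");         // The print() handler would run here.
                } else {
                    log.add("button:" + e.source());
                }
            } else {
                log.add("ignored");           // Events we have no handler for.
            }
        }
        return log;
    }

    public static void main(String[] args) {
        Queue<Event> q = new ArrayDeque<>();
        q.add(new Event(EventType.BUTTON_PUSHED, "PRINT_BUTTON"));
        q.add(new Event(EventType.OTHER, "timer"));
        q.add(new Event(EventType.QUIT, ""));
        System.out.println(run(q)); // [print, ignored, quit]
    }
}
```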
In C++, the programmer must often explicitly write an event loop similar to the one shown above. This can involve a lot of work, so Java® attempts to shield the programmer from the actual event loop, while still providing a flexible way to specify how events are processed.
2. The Java® Event Model (JDK 1.1 and above)
(Ref. Java® Tutorial)
The Java® event model is based on the notion of event sources and event listeners.
An event source is most frequently a user interface component (such as a button, menu item or scrollable view), which can notify registered listeners when events of interest occur. Note that an event source may generate both high level events e.g. button click, as well as low level events, e.g. mouse press.
An event listener is an object that can register an interest in receiving certain types of events from an event source. The event source sends out event notifications by calling an appropriate event handling method in the event listener object.
The event listener registration and notification process takes place according to event type. An object wishing to listen to events of a particular type must implement the corresponding event listener interface. The interface simply specifies a standard set of event handling functions that the listener object must provide.
Here is a list of events, and their corresponding event types and event listener interfaces.
The general approach to implementing an event listener is the same in every case.
- Write a class that implements the appropriate XXXListener interface.
- Create an object of type XXXListener.
- Register the event listener object with an event source by calling the event source’s addXXXListener method.
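The same three steps apply to any source/listener pair, not just GUI components. Here is a minimal non-GUI sketch of the pattern; TemperatureListener and Thermometer are hypothetical names invented for this illustration, not part of the Java API:

```java
import java.util.ArrayList;
import java.util.List;

// Step 1: a listener interface specifying the event handling method(s).
interface TemperatureListener {
    void temperatureChanged(double newValue);
}

// An event source that keeps a list of registered listeners.
class Thermometer {
    private final List<TemperatureListener> listeners = new ArrayList<>();

    // Step 3 registers through the source's addXXXListener method.
    public void addTemperatureListener(TemperatureListener l) {
        listeners.add(l);
    }

    public void setTemperature(double t) {
        // The source notifies listeners by calling their handler method.
        for (TemperatureListener l : listeners) {
            l.temperatureChanged(t);
        }
    }
}

public class ListenerSketch {
    public static void main(String[] args) {
        Thermometer source = new Thermometer();
        // Step 2: create an object implementing the listener interface.
        source.addTemperatureListener(new TemperatureListener() {
            public void temperatureChanged(double newValue) {
                System.out.println("Now " + newValue + " degrees");
            }
        });
        source.setTemperature(21.5); // prints "Now 21.5 degrees"
    }
}
```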
The following example shows how to create a frame. When the frame is closed, we want to make sure that the program terminates, since this does not happen automatically. We can use a WindowListener to do this.
import javax.swing.*;
import java.awt.event.*;
public class Main {
public static void main(String[] args) {
// Create a window. Then set its size and make it visible.
JFrame frame = new JFrame("Main window");
frame.setSize(400,400);
frame.setVisible(true);
// Make the program terminate when the frame is closed. We do this by registering a window listener
// to receive WindowEvents from the frame. The window listener will provide an event handler called
// windowClosing, which will be called when the frame is closed.
WindowListener listener = new MyWindowListener(); // A class that we write.
frame.addWindowListener(listener);
}
}
// Here is our window listener. We are only interested in windowClosing; however, we must provide
// implementations for all of the methods in the WindowListener interface.
class MyWindowListener implements WindowListener {
public void windowClosing(WindowEvent e) {
System.out.println("Terminating the program now.");
System.exit(0);
}
public void windowClosed(WindowEvent e) {}
public void windowOpened(WindowEvent e) {}
public void windowActivated(WindowEvent e) {}
public void windowDeactivated(WindowEvent e) {}
public void windowIconified(WindowEvent e) {}
public void windowDeiconified(WindowEvent e) {}
}
Unfortunately, this example involves quite a lot of code. There are a couple of ways to simplify the program:
Anonymous Classes
An anonymous class is a class that has no name. It is declared and instantiated within a single expression. Here is how we could use an anonymous class to simplify the closable frame example:
import javax.swing.*;
import java.awt.event.*;
public class Main {
public static void main(String[] args) {
// Create a window. Then set its size and make it visible.
JFrame frame = new JFrame("Main window");
frame.setSize(400,400);
frame.setVisible(true);
// Make the frame closable. Here we have used an anonymous class that implements the
// WindowListener interface.
frame.addWindowListener(new WindowListener() {
public void windowClosing(WindowEvent e) {
System.out.println("Terminating the program now.");
System.exit(0);
}
public void windowClosed(WindowEvent e) {}
public void windowOpened(WindowEvent e) {}
public void windowActivated(WindowEvent e) {}
public void windowDeactivated(WindowEvent e) {}
public void windowIconified(WindowEvent e) {}
public void windowDeiconified(WindowEvent e) {}
});
}
}
Event Adapters
An event adapter is just a class that implements an event listener interface, with empty definitions for all of the functions. The idea is that if we subclass the event adapter, we will only have to override the functions that we are interested in. The closable frame example can thus be shortened to:
import javax.swing.*;
import java.awt.event.*;
public class Main {
public static void main(String[] args) {
// Create a window. Then set its size and make it visible.
JFrame frame = new JFrame("Main window");
frame.setSize(400,400);
frame.setVisible(true);
// Make the frame closable. Here we have used an anonymous class that extends WindowAdapter.
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) { // This overrides the empty base class method.
System.out.println("Terminating the program now.");
System.exit(0);
}
});
}
}
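The adapter idea is not specific to window events: any listener interface with several methods can be paired with an all-empty adapter class, and subclasses then override only what they need. The following is a small non-GUI sketch of the same pattern; ConnectionListener and ConnectionAdapter are invented names for this illustration, not JDK classes:

```java
// A listener interface with several methods, like WindowListener.
interface ConnectionListener {
    void opened();
    void closed();
    void dataReceived(String data);
}

// The adapter: empty implementations of everything, like WindowAdapter.
class ConnectionAdapter implements ConnectionListener {
    public void opened() {}
    public void closed() {}
    public void dataReceived(String data) {}
}

public class AdapterSketch {
    public static void main(String[] args) {
        // Subclassing the adapter lets us override only the method we care about.
        ConnectionListener l = new ConnectionAdapter() {
            @Override
            public void dataReceived(String data) {
                System.out.println("got: " + data);
            }
        };
        l.opened();            // inherited empty method: does nothing
        l.dataReceived("hi");  // prints "got: hi"
    }
}
```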
3. Laying Out User Interface Components
Containers
(Ref. Java® Tutorial)
A Container is a GUI component that can hold other GUI components. Three commonly used container classes are
JFrame - This is a stand-alone window with a title bar, menubar and a border. It is typically used as the top-level container for a graphical Java® application.
JApplet - This is a container that can be embedded within an HTML page. It is typically used as the top-level container for a Java® applet.
JPanel - This is a container that must reside within another container. It provides a way to group several components (e.g. buttons) as a single unit, when they are laid out on the screen. JPanel can also be used as an area for drawing operations. (When used in this way, it can provide automatic double buffering, which is a technique for producing flicker-free animation.)
A component object, myComponent, can be added to a top-level container object, myContainer, using a statement of the form
myContainer.getContentPane().add(myComponent);
The following example illustrates how to add a JButton instance to an instance of JFrame.
import javax.swing.*;
import java.awt.event.*;
public class Main {
public static void main(String[] args) {
// Create a window.
JFrame frame = new JFrame("Main window");
frame.setSize(400,400);
// Create a button and add it to the frame.
JButton button = new JButton("Click me");
frame.getContentPane().add(button);
// Add an event handler for button clicks.
button.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) { // Only one method to implement.
System.out.println(e.getActionCommand()); // Prints out "Click me".
}
});
// Make the frame closable.
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
// Make the frame visible after adding the button.
frame.setVisible(true);
}
}
Layout Managers
(Ref. Java® Tutorial)
Our previous example has only one interesting GUI component: a JButton. What if we wanted to add a second JButton and perhaps a JTextArea, so that we can display the message through the GUI? We can control the layout of these components within the container by using a layout manager. Java® comes with six layout managers (five in java.awt and one in javax.swing):
FlowLayout - Lays out components in a line from left to right, moving to the next line when out of room. This layout style resembles the flow of text in a document.
BorderLayout - Lays out components in one of five positions - at the North, South, East or West borders, or else in the Center.
GridLayout - Lays out components in rows and columns of equal sized cells, like a spreadsheet.
GridBagLayout - Lays out components on a grid without requiring them to be of equal size. This is the most flexible and also the most complex of all the layout managers.
CardLayout - Lays out components like index cards, one behind another. (No longer useful, now that Swing provides a JTabbedPane component.)
BoxLayout - Lays out components with either vertical alignment or horizontal alignment. (A new layout manager in Swing.)
It is also possible to set a null layout manager and instead position components by specifying their absolute coordinates using the method
public void setLocation(int x, int y)
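Because most Swing components can be constructed without showing a window, the effect of choosing a layout manager can be explored even in a headless program. In this sketch, buildButtonPanel is our own helper, not a Swing method:

```java
import javax.swing.*;
import java.awt.*;

public class LayoutSketch {
    // Hypothetical helper: builds a panel of buttons under the given layout manager.
    public static JPanel buildButtonPanel(LayoutManager lm, String... labels) {
        JPanel panel = new JPanel();
        panel.setLayout(lm);                 // replace JPanel's default FlowLayout
        for (String label : labels) {
            panel.add(new JButton(label));   // each button is placed by the layout manager
        }
        return panel;
    }

    public static void main(String[] args) {
        // JPanel and JButton can be created headlessly; only top-level
        // windows such as JFrame require a display.
        System.setProperty("java.awt.headless", "true");
        JPanel grid = buildButtonPanel(new GridLayout(2, 2), "a", "b", "c", "d");
        JPanel flow = buildButtonPanel(new FlowLayout(), "Left", "Right");
        System.out.println(grid.getComponentCount()); // 4
        System.out.println(flow.getLayout().getClass().getSimpleName()); // FlowLayout
    }
}
```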
Suppose we wish to position our two JButtons side by side, with the JTextArea positioned below them. We start by embedding the JButtons within a JPanel, using FlowLayout as the layout manager for the JPanel. The JTextArea is best placed within a JScrollPane, since this will permit scrolling when the amount of text exceeds the preferred size of the scroll pane. We can now attach the JPanel and the JScrollPane to the North and South borders of the JFrame, by using BorderLayout as the layout manager for the JFrame. These containment relationships are illustrated below:
Here is the implementation:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
public class Main {
public static void main(String[] args) {
// Create a window and set its layout manager to be BorderLayout.
// (This happens to be the default layout manager for a JFrame.)
JFrame frame = new JFrame("Main window");
frame.setSize(400,400);
Container cf = frame.getContentPane();
cf.setLayout(new BorderLayout());
// Create a panel and set its layout manager to be FlowLayout.
// (This happens to be the default layout manager for a JPanel.)
JPanel panel = new JPanel();
panel.setLayout(new FlowLayout()); // No content pane for JPanel.
// Create two buttons and add them to the panel.
JButton button1 = new JButton("Left");
JButton button2 = new JButton("Right");
panel.add(button1);
panel.add(button2);
// Create a text area for displaying messages. We embed the text
// area in a scroll pane so that it doesn’t grow unboundedly.
JTextArea textArea = new JTextArea();
JScrollPane scrollPane = new JScrollPane(textArea);
scrollPane.setPreferredSize(new Dimension(400, 100));
textArea.setEditable(false);
// Position the panel and the text area within the frame.
cf.add(panel, "North");
cf.add(scrollPane, "South");
// Add event handlers for button clicks.
class MyListener implements ActionListener { // A local class.
private JTextArea mTextArea;
public void setTextArea(JTextArea t) {
mTextArea = t;
}
public void actionPerformed(ActionEvent e) {
mTextArea.append(e.getActionCommand()+"\n");
}
}
MyListener listener = new MyListener();
listener.setTextArea(textArea); // Cannot do this with an anonymous class.
button1.addActionListener(listener);
button2.addActionListener(listener);
// Make the frame closable.
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
// Make the frame visible after adding the components to it.
frame.setVisible(true);
}
}
4. Swing Component Overview
The components that we have seen so far are JFrame, JPanel, JButton, JTextArea and JScrollPane. The links below provide a good overview of the Swing components and how to use them.
Inheritance, Polymorphism and Virtual Functions
(Ref. Lippman 2.4, 17.1-17.6)
Inheritance allows us to specialize the behavior of a class. For example, we might write a Shape class, which provides basic functionality for managing 2D shapes. Our Shape class has member variables for the centroid and area. We may then specialize the Shape class to provide functionality for a particular shape, such as a circle. To do this, we write a class called Circle, which inherits the properties and methods of Shape.
class Circle : public Shape {
…
};
The Circle class adds a new member variable for the radius. Now, when we create a Circle object, it will have centroid and area variables (the Shape part) in addition to the radius variable (the Circle part). The Circle object can also call methods associated with class Shape, such as get_centroid(). We refer to the Shape class as the base class and we refer to the Circle class as the derived class.
Members of class Shape that are private (e.g. mCentroid) cannot be directly accessed from within the Circle class definition. However, they can be accessed indirectly through the Shape class’s public interface (e.g. get_centroid()). If we wish to allow class Circle to directly access members of class Shape, those members should be made protected (e.g. mfArea). To the outside world, i.e. in main(), protected members behave in exactly the same way as private members.
It is possible to use a base class pointer to address a derived class object, e.g.
Circle *pc;
Shape *ps;
pc = new Circle();
ps = pc;
This feature is known as polymorphism. We can use the Shape pointer to access those methods of the Circle object that are inherited from Shape, e.g.
ps->get_centroid();
The Circle class can also override functions that it inherits from the Shape class, as in the case of print(). To make this work, we must declare print() as a virtual function in class Shape. Then, when we use the Shape pointer to access the print() function, as in
ps->print();
we will invoke the print() function in the underlying Circle object. In the example below, we have used an array of Shape pointers, sa, to store a heterogeneous collection of Circle and Rectangle objects. In the code fragment
for (i = 0; i < num_shapes; i++) {
sa[i]->print(); // This will call either Circle::print() or Rectangle::print(), as appropriate.
}
the decision to call the print() function in class Circle or the one in class Rectangle must be made at run-time. The mechanism by which virtual function calls are resolved is known as dynamic binding.
The implementation of the print() function in class Shape serves as a default implementation, which will be used if the derived class chooses not to provide an overriding implementation. It is possible, however, for the Shape class to require all derived classes to provide an overriding implementation, as in the case of draw(). The draw() function is known as a pure virtual function, because it does not have an implementation in class Shape. Pure virtual functions have a declaration of the form
virtual void draw() = 0;
Since we have not implemented draw() in class Shape, the class is incomplete and we cannot actually create Shape objects. The Shape class is therefore said to be an abstract base class.
We must take care when deleting the objects stored in the array of Shape pointers. In the code fragment
for (i = 0; i < num_shapes; i++)
delete sa[i]; // This will call either Circle::~Circle() or Rectangle::~Rectangle(), as appropriate,
// before calling Shape::~Shape().
we have called delete on sa[i], which is a Shape pointer, even though the object that it points to is really a Circle or a Rectangle. To ensure that the appropriate Circle or Rectangle destructor is called, we must make the Shape destructor a virtual destructor.
shape.h
#ifndef _SHAPE_H_
#define _SHAPE_H_
#include <iostream.h>
#include "point.h"
#ifndef DEBUG_PRINT
#ifdef _DEBUG
#define DEBUG_PRINT(str) cout << str << endl;
#else
#define DEBUG_PRINT(str)
#endif
#endif
class Shape {
// The private members of the Shape class are only accessible within
// the definition of class Shape. They are not accessible within
// the definitions of classes derived from the Shape class, e.g. Circle,
// or within main().
private:
Point mCentroid;
// The protected members of the Shape class are accessible within the
// definition of class Shape. They are also accessible within the
// definitions of classes derived immediately from the Shape class, e.g.
// Circle. However, they are not accessible within main().
protected:
float mfArea;
// The public members of the Shape class are accessible everywhere i.e. in
// the Shape class definition, in derived class definitions and in main().
public:
Shape(float fX, float fY);
virtual ~Shape(); // A virtual destructor.
virtual void print(); // A virtual function.
virtual void draw() = 0; // A pure virtual function.
const Point& get_centroid() {
return mCentroid;
}
};
#endif
shape.C
#include "shape.h"
Shape::Shape(float fX, float fY) : mCentroid(fX, fY) {
// We must use an initialization list to initialize mCentroid.
// Here in the body of the constructor would be too late.
DEBUG_PRINT("In constructor Shape::Shape(float, float)")
}
Shape::~Shape() {
DEBUG_PRINT("In destructor Shape::~Shape()")
}
void Shape::print() {
DEBUG_PRINT("In Shape::print()")
cout << "Centroid: ";
mCentroid.print();
cout << "Area = " << mfArea << endl;
}
circle.h
#ifndef _CIRCLE_H_
#define _CIRCLE_H_
#include "shape.h"
class Circle : public Shape {
private:
float mfRadius;
public:
Circle(float fX=0, float fY=0, float fRadius=0);
~Circle();
void print();
void draw();
};
#endif
circle.C
#include "circle.h"
#define PI 3.1415926536
Circle::Circle(float fX, float fY, float fRadius) : Shape(fX, fY) {
// We must use an initialization list to initialize the Shape part of the Circle object.
DEBUG_PRINT("In constructor Circle::Circle(float, float, float)")
mfRadius = fRadius;
mfArea = PI * fRadius * fRadius; // mfArea is a protected member of class Shape.
}
Circle::~Circle() {
DEBUG_PRINT("In destructor Circle::~Circle()")
}
void Circle::print() {
DEBUG_PRINT("In Circle::print()")
cout << "Circle Radius: " << mfRadius << endl;
// If we want to print out the Shape part of the Circle object as well,
// we could call the base class print function like this:
Shape::print();
}
void Circle::draw() {
// Assume that this draws the circle.
DEBUG_PRINT("In Circle::draw()")
}
rectangle.h
#ifndef _RECTANGLE_H_
#define _RECTANGLE_H_
#include "shape.h"
class Rectangle : public Shape {
private:
float mfWidth, mfHeight;
public:
Rectangle(float fX=0, float fY=0, float fWidth=1, float fHeight=1);
~Rectangle();
void print();
void draw();
};
#endif
rectangle.C
#include "rectangle.h"
Rectangle::Rectangle(float fX, float fY, float fWidth, float fHeight) : Shape(fX, fY) {
// We must use an initialization list to initialize the Shape part of the Rectangle object.
DEBUG_PRINT("In constructor Rectangle::Rectangle(float, float, float, float)")
mfWidth = fWidth;
mfHeight = fHeight;
mfArea = fWidth * fHeight; // mfArea is a protected member of class Shape.
}
Rectangle::~Rectangle() {
DEBUG_PRINT("In destructor Rectangle::~Rectangle()")
}
void Rectangle::print() {
DEBUG_PRINT("In Rectangle::print()")
cout << "Rectangle Width: " << mfWidth << " Height: " << mfHeight << endl;
// If we want to print out the Shape part of the Rectangle object as well,
// we could call the base class print function like this:
Shape::print();
}
void Rectangle::draw() {
// Assume that this draws the rectangle.
DEBUG_PRINT("In Rectangle::draw()")
}
myprog.C
#include "shape.h"
#include "circle.h"
#include "rectangle.h"
int main() {
const int num_shapes = 5;
int i;
// Create an automatic Circle object.
Circle c;
// We cannot instantiate a Shape object because the Shape class has a pure virtual function
// i.e. a virtual function without a definition within class Shape. Class Shape is therefore said
// to be an abstract base class.
// Shape s; // This is not allowed.
// We are allowed to have Shape pointers, however.
Shape *sa[num_shapes]; // Create an array of Shape pointers.
// C++ allows us to use a base class pointer to point to a derived class object. This is known
// as polymorphism. We can thus store a heterogeneous collection of Circles and Rectangles
// using the array of Shape pointers.
sa[0] = new Circle(2,3,1);
sa[1] = new Rectangle(0,2,2,3);
sa[2] = new Circle(7,6,3);
sa[3] = new Circle(0,2,2);
sa[4] = new Rectangle(4,3,1,1);
// Print out all of the objects. We have made the print function virtual
// in class shape. This means that it can be overridden by print functions
// with a similar signature that are specific to the derived classes. If
// a derived class does not provide an implementation of print, then the
// Shape::print function will be called by default.
for (i = 0; i < num_shapes; i++) {
sa[i]->print(); // This will call either Circle::print() or Rectangle::print(), as appropriate.
}
// Delete the objects. Note that we have called delete on Shape pointers,
// even though the objects that we created using new were derived class
// objects. To ensure that the appropriate destructor for the derived
// object is called, we must make the Shape destructor virtual.
for (i = 0; i < num_shapes; i++)
delete sa[i]; // This will call either Circle::~Circle() or Rectangle::~Rectangle(), as appropriate,
// before calling Shape::~Shape().
return 0;
}
This lecture is courtesy of Petros Komodromos.
Topics
- Introduction to Java® 3D
- Java® 3D References
- Examples and Applications
- Scene Graph Structure and basic Java® 3D concepts and classes
- A simple Java® 3D program
- Performance of Java® 3D
1. Introduction to Java® 3D
Java® 3D is a general-purpose, platform-independent, object-oriented API for 3D-graphics that enables high-level development of Java® applications and applets with 3D interactive rendering capabilities. With Java® 3D, 3D scenes can be built programmatically, or, alternatively, 3D content can be loaded from VRML or other external files. Java® 3D, as a part of the Java® Media APIs, integrates well with the other Java® technologies and APIs. For example, Java® 2D API can be used to plot selected results, while the Java® Media Framework (JMF) API can be used to capture and stream audio and video.
Java® 3D is based on a directed-acyclic-graph scene structure, known as a scene graph, that is used for representing and rendering the scene. The scene graph is a tree-like structure containing nodes with all the information necessary to create and render the scene. In particular, it contains the nodes used to represent and transform all objects in the scene, as well as all viewing control parameters, i.e. all objects with information related to the viewing of the scene. The scene graph can be manipulated easily and quickly, allowing efficient rendering by following an optimal traversal order and bypassing hidden parts of objects in the scene.
The Java® 3D API has been developed in a joint collaboration among Intel, Silicon Graphics, Apple, and Sun, combining the related knowledge of these companies. It has been designed to be a platform-independent API with respect to the host’s operating system (PC/Solaris/Irix/HPUX/Linux) and graphics platform (OpenGL/Direct3D), as well as the input and output (display) devices. The implementation of Java® 3D is built on top of OpenGL or Direct3D. The high-level Java® 3D API allows rapid application development, which is often critical.
However, Java® 3D has some weaknesses, such as performance inferior to that of OpenGL and limited access to the rendering pipeline details. It is also still under development, and several bugs need to be fixed. Although Java® 3D cannot achieve peak performance, its portability and rapid-development advantages may outweigh the slight performance penalty for many applications.
The current version of the Java® 3D API is Version 1.2, which works together with the Java® 2 Platform. Both APIs can be downloaded for free from the Java® products page of Sun.
{{< anchor "2" >}}{{< /anchor >}}2. Java® 3D References
The following list includes many links related to the Java® 3D API:
- Java® 3D Tutorial (PDF format)
- A Fourth generation Java® 3D graphics API
- Java® 3D FAQ at Sun
- Extensive Java® 3D FAQ at Sun
- Java® 3D Group at NCSA
- The Java® 3D and VRML Working group
- VRML and Java® 3D Information Center
- Web 3D consortium
Java® 3D is specified in the packages: javax.media.j3d and javax.vecmath. Supporting classes and utilities are provided in the com.sun.j3d packages.
{{< anchor "3" >}}{{< /anchor >}}3. Examples and Applications
The following examples are provided with Java® 3D, each in a directory of the same name under the subdirectory java3d of the directory demo (where applicable, the command used to run an example is given after its name).
- AlternateAppearance
- Appearance
- AppearanceMixed
- AWT_Interaction: java AWTInteraction
- Background
- Billboard
- ConicWorld: java SimpleCylinder ; java TexturedSphere
- FourByFour: appletviewer fbf.html
- GearTest: java GearBox
- GeometryByReference
- GeometryCompression
- HelloUniverse
- Lightwave
- LOD
- ModelClip
- Morphing
- ObjLoad
- OffScreenCanvas3D
- OrientedShape3D
- PackageInfo
- PickTest
- PickText3D: java PickText3DGeometry
- PlatformGeometry
- PureImmediate
- ReadRaster
- Sound
- SphereMotion: appletviewer SphereMotion.html
- SplineAnim
- Text2D
- Text3D
- TextureByReference
- TextureTest
- TickTockCollision: java TickTockCollision
- TickTockPicking
- VirtualInputDevice
For example, on a Sun Ultra 10 workstation the files for the GearTest example are located under the subdirectory mit/java_v1.2ref/distrib/sun4x_56/demo/java3d/GearTest.
Similarly, if you download Java® 3D on your computer, the examples are typically stored in subdirectories in the subdirectory demo\java3d of the directory where Java® has been downloaded, e.g. at C:\Java\jdk1.3\demo\java3d.
There are many fields in which Java® 3D can be used. The following are just a small selection of Java® 3D applications that are available on the net.
- NCSA Astro3D
- Collaborative Visualization Space Science and Engineering Center (SSEC)
- Java® 3D API Customer Success Stories
{{< anchor "4" >}}{{< /anchor >}}4. Scene Graph Structure and Basic Java® 3D Concepts and Classes
Scene graph: Content-View Branches
A Java® 3D scene is created as a tree-like graph structure, which is traversed during rendering. The scene graph structure contains nodes that represent either the actual objects of the scene or specifications that describe how to view the objects. Usually, there are two branches in Java® 3D: the content branch, which contains the nodes that describe the actual objects in the scene, and the view branch, which contains nodes that specify viewing-related conditions. Usually, the content branch contains a much larger number of nodes than the view branch.
The following image shows a basic Java® 3D graph scene, where the content branch is located on the left and the view branch on the right side of the graph:
Java® 3D applications construct individual graphic components as separate objects, called nodes, and connect them together into a tree-like scene graph, in which the objects and the viewing of them can easily be manipulated. The scene graph structure contains the description of the virtual universe, which represents the entire scene. All information concerning geometric objects, their attributes, position, and orientation, as well as the viewing information, is contained in the scene graph.
The above scene graph consists of superstructure components, in particular a VirtualUniverse and a Locale object, and two BranchGroup objects, which are attached to the superstructure. One branch graph, rooted at the left BranchGroup node, is the content branch, containing all objects relevant to the contents of the scene. The other branch, known as the view branch, contains all the information related to the viewing and rendering details of the scene.
The state of a shape node, or any other leaf node, is defined, during rendering, by the nodes that lie in the direct path between that node and the root node, i.e. the VirtualUniverse. For example, a TransformGroup node in a path between a leaf node and the scene’s root can change the position, orientation, and scale of the object represented by the leaf node.
SceneGraphObject Hierarchy
The Java® 3D node objects of a Java® 3D scene graph, which are instances of the Node class, may reference node component objects, which are instances of the class NodeComponent. The Node and NodeComponent classes are subclasses of the SceneGraphObject abstract class. Almost all objects that may be included in a scene graph are instances of subclasses of the SceneGraphObject class. A scene graph object is constructed by instantiating the corresponding class, and then, it can be accessed and manipulated using the provided set and get methods.
The following graph shows the class hierarchy of the major subclasses of the SceneGraphObject class:
Class Node and its subclasses
The abstract class Node is the base class for almost all objects that constitute the scene graph. It has two subclasses, Group and Leaf, each of which has many useful subclasses. Class Group is a superclass of, among others, the classes BranchGroup and TransformGroup. Class Leaf, which is used for nodes with no children, is a superclass of, among others, the classes Behavior, Light, Shape3D, and ViewPlatform. The ViewPlatform node is used to define from where the scene is viewed; in particular, it can be used to specify the location and orientation of the point of view.
Class NodeComponent and its subclasses
Class NodeComponent is the base class for classes that represent attributes associated with the nodes of the scene graph. It is the superclass of all scene graph node component classes, such as the Appearance, Geometry, PointAttributes, and PolygonAttributes classes. NodeComponent objects are used to specify attributes for a node, such as the color and geometry of a shape node, i.e. a Shape3D node. In particular, a Shape3D node uses an Appearance and a Geometry object, where the Appearance object controls how the associated geometry should be rendered by Java® 3D.
The geometry component information of a Shape3D node, i.e. its geometry and topology, can be specified in an instance of a subclass of the abstract Geometry class. A Geometry object is used as a component object of a Shape3D leaf node. Geometry objects consist of four generic geometric types, each of which defines a visible object or a set of objects.
For example, GeometryArray is a subclass of the Geometry class (which itself extends NodeComponent) that is extended to create the various primitive types, such as lines, triangle strips, and quadrilaterals.
The IndexedGeometryArray object above contains separate integer arrays that index, among others, into arrays of positional coordinates specifying how vertices are connected to form geometry primitives. This class is extended to create the various indexed primitive types, such as IndexedLineArray, IndexedPointArray, and IndexedQuadArray.
Vertex data may be passed to the geometry array either by copying the data into the array using the existing methods, which is the default mode, or by passing a reference to the data.
The methods for setting positional coordinates, colors, normals, and texture coordinates, such as the method setCoordinates(), copy the data into the GeometryArray, which offers much flexibility in organizing the data.
Another set of methods allows data to be passed and accessed by reference: for example, the setCoordRef3d() method sets a reference to user-supplied data, e.g. coordinate arrays. In order to enable passing data by reference, the BY_REFERENCE bit in the vertexFormat field of the constructor for the corresponding GeometryArray must be set accordingly. Data in any array that is referenced by a live or compiled GeometryArray object may only be modified using the updateData method, assuming that the ALLOW_REF_DATA_WRITE capability bit has been set accordingly, which can be done using the setCapability method.
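As a sketch of this by-reference mode (assuming the standard javax.media.j3d GeometryArray API; the helper class and method names here are hypothetical):

```java
import javax.media.j3d.Geometry;
import javax.media.j3d.GeometryArray;
import javax.media.j3d.GeometryUpdater;
import javax.media.j3d.TriangleArray;

// Hypothetical helper: build a triangle whose vertex coordinates are
// held by reference, so the application can modify them while live.
public class ByRefTriangle {
    public static TriangleArray make(float[] coords) {
        TriangleArray tri = new TriangleArray(3,
                GeometryArray.COORDINATES | GeometryArray.BY_REFERENCE);
        // Required to modify the referenced data after the node is live.
        tri.setCapability(GeometryArray.ALLOW_REF_DATA_WRITE);
        tri.setCoordRefFloat(coords); // a reference is stored, not a copy
        return tri;
    }

    // While live, referenced data may only be touched inside updateData().
    public static void shiftX(TriangleArray tri, final float dx) {
        tri.updateData(new GeometryUpdater() {
            public void updateData(Geometry geometry) {
                float[] c = ((GeometryArray) geometry).getCoordRefFloat();
                for (int i = 0; i < c.length; i += 3) {
                    c[i] += dx; // x components are every third element
                }
            }
        });
    }
}
```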
The Appearance object defines all rendering state that controls the way the associated geometry should be rendered. The rendering state consists of the following:
- Point attributes: a PointAttributes object defines attributes used for points, such as the point size
- Line attributes: a LineAttributes object defines attributes used for lines, such as the width and pattern
- Polygon attributes: a PolygonAttributes object defines attributes used for polygons, such as the rasterization mode (i.e. filled, lines, or points)
- Coloring attributes: a ColoringAttributes object defines attributes used in color selection and shading
- Rendering attributes: a RenderingAttributes object defines rendering operations, such as whether invisible objects are rendered
- Transparency attributes: a TransparencyAttributes object defines the attributes that affect the transparency of the object
- Material: a Material object defines the appearance of an object under illumination, such as the ambient, specular, diffuse, and emissive colors and the shininess; it is used to control the color of the shape
- Texture: a Texture object defines the texture image and filtering parameters used when texture mapping is enabled
- Texture attributes: a TextureAttributes object defines the attributes that apply to texture mapping, such as the texture mode, texture transform, blend color, and perspective correction mode
- Texture coordinate generation: a TexCoordGeneration object defines the attributes that apply to texture coordinate generation
- Texture unit state: an array of TextureUnitState objects defines the texture state for each of N separate texture units, allowing multiple textures to be applied to geometry; each TextureUnitState object contains a Texture object, a TextureAttributes object, and a TexCoordGeneration object for one texture unit
VirtualUniverse and Locale
After constructing a subgraph, it can be attached to a VirtualUniverse object through a high-resolution Locale object, which is itself attached to the virtual universe. The VirtualUniverse is the root of all Java® 3D scenes, while Locale objects are used for basic spatial placement. The attachment to a Locale object makes all objects in the attached subgraph live (i.e. drawable), while removing it from the locale reverses the effect. Any node added to a live scene graph becomes live. However, in order to be able to modify a live node the corresponding capability bits should be set accordingly.
Typically, a Java® 3D program has only one VirtualUniverse, which consists of one or more Locale objects that may contain collections of subgraphs of the scene graph rooted by BranchGroup nodes, i.e. a large number of branch graphs. Although a Locale has no explicit children, it may reference an arbitrary number of BranchGroup nodes. The subgraphs contain all the scene graph nodes that exist in the universe. A Locale node is used to accurately position a branch graph in a universe, specifying a location within the virtual universe using high-resolution coordinates (HiResCoord), which comprise 768 bits of fixed-point values. A Locale is attached to a single VirtualUniverse node using one of its constructors.
The VirtualUniverse and Locale classes, as well as the View class, are subclasses of the basic superclass Object, as shown below:
Branch Graphs
A branch graph is a scene graph rooted in a BranchGroup node and can be used to point to the root of a scene graph branch. A graph branch can be added to the list of branch graphs of a Locale node using its addBranchGraph(BranchGroup bg) method. BranchGroup objects are the only objects that can be inserted into a Locale’s list of objects.
A BranchGroup may be compiled by calling its compile method, which causes the entire subgraph to be compiled including any BranchGroup nodes that may be contained within the subgraph. A graph branch, rooted by a BranchGroup node, becomes live when inserted into a virtual universe by attaching it to a Locale. However, if a BranchGroup is contained in another subgraph as a child of some other group node, it may not be attached to a Locale node.
Capability Bits, Making Live and Compiling
Certain optimizations can be done to achieve better performance by compiling a subgraph into an optimized internal format prior to its attachment to a virtual universe. However, many set and get methods of objects that are part of a live or compiled scene graph cannot be accessed. In general, the set and get methods can be used only during the creation of a scene graph, except where explicitly allowed, in order to allow certain optimizations during rendering. The set and get methods that can be used when the object is live or compiled must be specified using a set of capability bits, which are disabled by default, prior to compiling the object or making it live. The methods isCompiled() and isLive() can be used to find out whether a scene graph object is compiled or live, and the methods setCapability() and getCapability() can be used to set and query the capability bits. However, the fewer capability bits that are enabled, the more optimizations can be performed during rendering.
Viewing Branch: ViewPlatform, View, Screen3D
The view branch usually has the following structure, consisting of nodes that control the viewing of the scene.
The view branch contains some scene graph viewing objects that can be used to define the viewing parameters and details, such as the ViewPlatform, View, Screen3D, PhysicalBody, and PhysicalEnvironment classes.
Java® 3D uses a viewing model that can be used to transform the position and direction of the viewing while the content branch remains unmodified. This is achieved with the use of the ViewPlatform and the View classes, to specify from where and how, respectively, the scene is being viewed.
The ViewPlatform node controls the position, orientation and scale of the viewer. A viewer can navigate through the virtual universe by changing the transformation in the scene graph hierarchy above the ViewPlatform node. The location of the viewer can be set using a TransformGroup node above the ViewPlatform node. The ViewPlatform node has an activation radius that is used together with the bounding volumes of Behavior, Background and other nodes in order to determine whether the latter nodes should be scheduled, or turned on, respectively. The method setActivationRadius() can be used to set the activation radius.
A View object connects to the ViewPlatform node in the scene graph, and specifies all viewing parameters of the rendering process of a 3D scene. Although it exists outside of the scene graph, it attaches to a ViewPlatform leaf node in the scene graph, using the method attachViewPlatform(ViewPlatform vp). A View object contains references to a PhysicalBody and a PhysicalEnvironment object, which can be set using the methods setPhysicalBody() and setPhysicalEnvironment(), respectively.
A View object contains a list of Canvas3D objects where rendering of the view is done. The method addCanvas3D(Canvas3D c) of the class View can be used to add the provided Canvas3D object to the list of canvases of the View object.
Class Canvas3D extends the heavyweight class Canvas in order to achieve hardware acceleration, since a low-level rendering library, such as OpenGL, requires the rendering to be done in a native window to enable hardware acceleration.
Finally, all Canvas3D objects on the same physical display device refer to a Screen3D object, which contains all information about that particular display device. Screen3D can be obtained from the Canvas3D using the getScreen3D() method.
Default Coordinate System
The default coordinate system is a right-handed Cartesian coordinate system centered on the screen, with the x- and y-axes directed towards the right and the top of the screen, respectively. The z-axis is, by default, directed out of the screen towards the viewer, as shown below. Default distances are in meters and angles in radians.
Transformations
Class TransformGroup which extends the class Group can be used to set a spatial transformation, such as positioning, orientation, and scaling of its children through the use of a Transform3D object. A TransformGroup node enables the setting and use of a coordinate system relative to its parent coordinate system.
The Transform3D object of a TransformGroup object can be set using the method setTransform(Transform3D t), which is used to set the transformation components of the Transform3D object to the ones of the passed parameter.
A Transform3D object is a 4x4 double-precision matrix that is used to determine the transformations of a TransformGroup node, as shown in the following equation. The elements T{{< sub "00" >}}, T{{< sub "01" >}}, T{{< sub "02" >}}, T{{< sub "10" >}}, T{{< sub "11" >}}, T{{< sub "12" >}}, T{{< sub "20" >}}, T{{< sub "21" >}}, and T{{< sub "22" >}} are used to set the rotation and scaling, and T{{< sub "03" >}}, T{{< sub "13" >}}, and T{{< sub "23" >}} are used to set the translation.
As the scene graph is traversed by the Java® 3D renderer, the transformations specified by any transformation nodes accumulate. The transformations closer to the geometry nodes are applied before those closer to the virtual universe node.
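To illustrate how such 4x4 transforms compose, the following self-contained sketch (plain Java with a hypothetical helper class, not part of the Java 3D API) multiplies a translation by a rotation, mirroring what Transform3D does internally:

```java
// Minimal 4x4 matrix sketch illustrating how transforms accumulate
// (plain Java; Transform3D performs the equivalent internally).
public class TransformSketch {
    // Multiply two 4x4 row-major matrices: result = a * b.
    public static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    public static double[][] translation(double tx, double ty, double tz) {
        return new double[][] {
            {1, 0, 0, tx}, {0, 1, 0, ty}, {0, 0, 1, tz}, {0, 0, 0, 1}};
    }

    public static double[][] rotZ(double theta) {
        double c = Math.cos(theta), s = Math.sin(theta);
        return new double[][] {
            {c, -s, 0, 0}, {s, c, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};
    }

    public static void main(String[] args) {
        // Rotate first, then translate (the same values used in the
        // example program below): result = T * R.
        double[][] m = mul(translation(0.7, 0.6, -1.0), rotZ(0.5));
        // The translation entries end up in column 3, matching the
        // T03, T13, T23 elements of the Transform3D matrix.
        System.out.println(m[0][3] + " " + m[1][3] + " " + m[2][3]);
        // prints: 0.7 0.6 -1.0
    }
}
```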
{{< anchor "5" >}}{{< /anchor >}}5. A Simple Java® 3D Program
A Java® 3D program builds a scene graph, using Java® 3D classes and methods, that can be rendered onto the screen.
The following program creates two color cubes and a sphere, as shown in the snapshot that follows the code.
import java.awt.*;
import javax.swing.*;
import javax.media.j3d.*;
import javax.vecmath.*;
import java.awt.event.*;
import com.sun.j3d.utils.geometry.*;
public class MyJava3D extends JFrame
{
// Virtual Universe object.
private VirtualUniverse universe;
// Locale of the scene graph.
private Locale locale;
// BranchGroup for the Content Branch of the scene
private BranchGroup contentBranch;
// TransformGroup node of the scene contents
private TransformGroup contentsTransGr;
// BranchGroup for the View Branch of the scene
private BranchGroup viewBranch;
// ViewPlatform node, defines from where the scene is viewed.
private ViewPlatform viewPlatform;
// Transform group for the ViewPlatform node
private TransformGroup vpTransGr;
// View node, defines the View parameters.
private View view;
// A PhysicalBody object can specify the user’s head
PhysicalBody body;
// A PhysicalEnvironment object can specify the physical
// environment in which the view will be generated
PhysicalEnvironment environment;
// Drawing canvas for 3D rendering
private Canvas3D canvas;
// Screen3D Object contains screen’s information
private Screen3D screen;
private Bounds bounds;
public MyJava3D()
{
super("My First Java3D Example");
// Bounds used by the lights and the ViewPlatform; must be created
// before setContent() and setViewing() reference it.
bounds = new BoundingSphere(new Point3d(0.0, 0.0, 0.0), Double.MAX_VALUE);
// Creating and setting the Canvas3D
canvas = new Canvas3D(null);
getContentPane().setLayout(new BorderLayout());
getContentPane().add(canvas, "Center");
// Setting the VirtualUniverse and the Locale nodes
setUniverse();
// Setting the content branch
setContent();
// Setting the view branch
setViewing();
// To avoid problems between Java3D and Swing
JPopupMenu.setDefaultLightWeightPopupEnabled(false);
// enabling window closing
addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e)
{ System.exit(0); } });
setSize(600, 600);
}
private void setUniverse()
{
// Creating the VirtualUniverse and the Locale nodes
universe = new VirtualUniverse();
locale = new Locale(universe);
}
private void setContent()
{
// Creating the content branch
contentsTransGr = new TransformGroup();
contentsTransGr.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
setLighting();
ColorCube cube1 = new ColorCube(0.1);
Appearance appearance = new Appearance();
cube1.setAppearance(appearance);
contentsTransGr.addChild(cube1);
ColorCube cube2 = new ColorCube(0.25);
Transform3D t1 = new Transform3D();
t1.rotZ(0.5);
Transform3D t2 = new Transform3D();
t2.set(new Vector3f(0.7f, 0.6f,-1.0f));
t2.mul(t1);
TransformGroup trans2 = new TransformGroup(t2);
trans2.addChild(cube2);
contentsTransGr.addChild(trans2);
Sphere sphere = new Sphere(0.2f);
Transform3D t3 = new Transform3D();
t3.set(new Vector3f(-0.2f, 0.5f,-0.2f));
TransformGroup trans3 = new TransformGroup(t3);
Appearance appearance3 = new Appearance();
Material mat = new Material();
mat.setEmissiveColor(-0.2f, 1.5f, 0.1f);
mat.setShininess(5.0f);
appearance3.setMaterial(mat);
sphere.setAppearance(appearance3);
trans3.addChild(sphere);
contentsTransGr.addChild(trans3);
contentBranch = new BranchGroup();
contentBranch.addChild(contentsTransGr);
// Compiling the branch graph before making it live
contentBranch.compile();
// Adding a branch graph into a locale makes its nodes live (drawable)
locale.addBranchGraph(contentBranch);
}
private void setLighting()
{
AmbientLight ambientLight = new AmbientLight();
ambientLight.setEnable(true);
ambientLight.setColor(new Color3f(0.10f, 0.1f, 1.0f) );
ambientLight.setCapability(AmbientLight.ALLOW_STATE_READ);
ambientLight.setCapability(AmbientLight.ALLOW_STATE_WRITE);
ambientLight.setInfluencingBounds(bounds);
contentsTransGr.addChild(ambientLight);
DirectionalLight dirLight = new DirectionalLight();
dirLight.setEnable(true);
dirLight.setColor( new Color3f( 1.0f, 0.0f, 0.0f ) );
dirLight.setDirection( new Vector3f( 1.0f, -0.5f, -0.5f ) );
dirLight.setCapability(Light.ALLOW_STATE_WRITE);
dirLight.setInfluencingBounds(bounds);
contentsTransGr.addChild(dirLight);
}
private void setViewing()
{
// Creating the viewing branch
viewBranch = new BranchGroup();
// Setting the viewPlatform
viewPlatform = new ViewPlatform();
viewPlatform.setActivationRadius(Float.MAX_VALUE);
viewPlatform.setBounds(bounds);
Transform3D t = new Transform3D();
t.set(new Vector3f(0.3f, 0.7f, 3.0f));
vpTransGr = new TransformGroup(t);
// Node capability bits grant read and write access
// after a node is live or compiled.
// The number of enabled capabilities is kept small to allow
// more optimizations during compilation.
vpTransGr.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
vpTransGr.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
vpTransGr.addChild(viewPlatform);
viewBranch.addChild(vpTransGr);
// Setting the view
view = new View();
view.setProjectionPolicy(View.PERSPECTIVE_PROJECTION );
view.addCanvas3D(canvas);
body = new PhysicalBody();
view.setPhysicalBody(body);
environment = new PhysicalEnvironment();
view.setPhysicalEnvironment(environment);
view.attachViewPlatform(viewPlatform);
view.setWindowResizePolicy(View.PHYSICAL_WORLD);
locale.addBranchGraph(viewBranch);
}
public static void main(String[] args)
{
JFrame frame = new MyJava3D();
frame.setVisible(true);
}
}
A utility class, called SimpleUniverse, can alternatively be used to automatically build a common arrangement of universe, locale, and viewing classes, avoiding the need to explicitly create the viewing branch. Then, a branch is added into the simple universe to make its nodes live (i.e. drawable).
SimpleUniverse simpleUniverse = new SimpleUniverse(canvas);
simpleUniverse.addBranchGraph(contentBranch);
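The viewing side of a SimpleUniverse can still be adjusted after construction. For example, assuming the standard com.sun.j3d.utils.universe utilities, the following call moves the ViewPlatform back along the z-axis so that objects at the origin fall within the default field of view:

```java
// Back the ViewPlatform away from the origin so that objects
// placed there are visible with the default projection.
simpleUniverse.getViewingPlatform().setNominalViewingTransform();
```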
{{< anchor "6" >}}{{< /anchor >}}6. More on Java® 3D
Java® 3D and Swing
Since Canvas3D extends the heavyweight AWT class Canvas, it should be handled with care when Swing is used, and the guidelines for mixing AWT and Swing components should be followed. The main problem is that there is a one-to-one correspondence between heavyweight components and their window system peers, i.e. native OS window components, whereas a lightweight component has no peer of its own and expects to use the peer of its enclosing container.
When lightweight components overlap with heavyweight components, the heavyweight components are always painted on top. In general, the heavyweight Canvas3D of Java® 3D should be kept apart from lightweight Swing components using different containers to avoid problems.
To avoid heavyweight components overlapping Swing popup menus, which are lightweight, the popup menus can be forced to be heavyweight using the method setLightWeightPopupEnabled() of the JPopupMenu class.
Similarly, problems with tooltips can be avoided by invoking the following method
ToolTipManager.sharedInstance().setLightWeightPopupEnabled(false)
Behaviors
Behaviors are essentially Java® methods that are scheduled to run only when certain requirements are satisfied according to wakeup conditions. Although a Behavior object is connected to the scene it is kept in a separate area of the Java® 3D runtime environment and it is not considered part of the scene graph. The runtime environment treats a Behavior object differently ensuring that certain actions take place. All behaviors in Java® 3D extend the Behavior class, which is an abstract class that itself extends the Leaf class. The Behavior class provides a way to execute certain statements, provided in the processStimulus() method, in order to modify the scene graph when specified criteria are satisfied.
The Behavior class has two major methods, the initialize() method, which is called when the behavior becomes live, and the processStimulus() method, which is called by the Java® 3D scheduler whenever appropriate, as well as a scheduling region. Typically, to create a custom behavior, the Behavior class is extended and referenced by the scene graph at an appropriate place, which it should be able to affect. A custom behavior that extends the Behavior class should implement the initialize() and processStimulus() methods, and provide any other methods and constructors that may be needed. The Behavior object contains the state information that is needed by its initialize() and processStimulus() methods. A constructor or another method may be used to set references to the scene graph objects upon which the behavior acts. In addition, the Behavior class is extended by the following three classes: Billboard, Interpolator, and LOD.
The initialize() method is called once when the behavior becomes “live”, i.e. when its BranchGroup node is added to a VirtualUniverse, to initialize the behavior. It is called by the Java® 3D behavior scheduler and should never be called directly. The initialize() method is used to set a Behavior object that has been added to the scene graph into a known condition and to register the criteria used to decide on its execution. Classes that extend Behavior must provide their own initialize() method, which allows a Behavior object to initialize its internal state and specify its initial wakeup conditions. Java® 3D automatically invokes a behavior’s initialize code when a BranchGroup node that contains the behavior is added to the virtual universe, i.e. becomes live. The initialize() method should return promptly, since Java® 3D does not invoke it in a new thread and must regain control. Finally, a wakeup condition must be set in order for the behavior’s processStimulus() method to be invoked.
However, a Behavior object is considered active only when its scheduling bounds intersect the activation volume of a ViewPlatform node. Therefore, the scheduling bounds should be provided for a behavior in order to be able to receive stimuli. The scheduling bounds of a behavior can be specified as a bounded spatial volume, such as a sphere, using the method setSchedulingBounds(Bounds region). Bounds are used for selective scheduling to improve performance. Bounds are used to decide whether a behavior should be added to the list of scheduled behaviors.
The processStimulus() method is called whenever the wakeup criteria are satisfied and the ViewPlatform’s activation region intersects the Behavior’s scheduling region. The method is called by the Java® 3D behavior scheduler when something happens that causes the behavior to execute: a stimulus, i.e. a notification, informs the behavior that it should execute its processStimulus() method. Applications should therefore not call this method explicitly. Classes that extend the Behavior class must provide their own processStimulus() method. The scheduling region defines a spatial volume that serves to enable the scheduling of Behavior nodes: a Behavior node is active, i.e. it can receive stimuli, whenever its scheduling region intersects the activation volume of a ViewPlatform.
The Java® 3D behavior scheduler invokes the processStimulus() method of a Behavior node when its scheduling region intersects the activation volume of a ViewPlatform node and all wakeup criteria of that behavior are satisfied. The statements in the processStimulus() method may then perform any computations and actions, including registering state-change information that could cause Java® 3D to wake other Behavior objects, modifying node values within the scene graph, changing the internal state of the behavior, specifying its next wakeup conditions, and exiting. A Behavior object is allowed to change its next trigger event. The processStimulus() method typically manipulates scene graph elements, as long as the associated capability bits are set accordingly. For example, a Behavior node can be used to repeatedly modify a TransformGroup node in order to animate the objects associated with that TransformGroup node.
The amount of work done in a processStimulus() method should be limited since the method may lower the frame rate of the renderer. Java® 3D assumes that Behavior methods run to completion and if necessary they spawn threads.
The application must provide the Behavior object with references to those scene graph elements that the Behavior object will manipulate. This is achieved by providing those references as arguments to the constructor of the behavior when the Behavior object is created. Alternatively, the Behavior object itself can obtain access to the relevant scene graph elements either when Java® 3D invokes its initialize() method or each time Java® 3D invokes its processStimulus() method. Typically, the application provides references to the scene graph objects that a behavior should be able to access as arguments to its constructor when the Behavior is instantiated.
The structure of each Behavior method consists of the following parts:
- code to decode and extract references from the WakeupCondition enumeration that awoke the object
- code to perform the manipulations associated with the WakeupCondition
- code to establish new WakeupCondition for this behavior
- a path to exit, so that execution returns to the Java® 3D behavior scheduler
The WakeupCondition class is an abstract class that specifies a single wakeup condition. It is specialized into 14 different WakeupCriterion subclasses and into 4 subclasses that can be used to create complex wakeup conditions from boolean logic combinations of individual WakeupCriterion objects. A Behavior node provides a WakeupCondition object to the Java® 3D behavior scheduler using its wakeupOn() method. When that WakeupCondition is satisfied, while the scheduling region intersects the activation volume of a ViewPlatform node, the behavior scheduler passes that same WakeupCondition back to the Behavior via an enumeration.
Java® 3D provides the following wakeup criteria that Behavior objects can use to specify a complex WakeupCondition. All of the following are subclasses of the WakeupCriterion class, which itself is a subclass of the WakeupCondition class.
- WakeupOnViewPlatformEntry: when the center of a ViewPlatform enters a specified region
- WakeupOnViewPlatformExit: when the center of a ViewPlatform exits a specified region
- WakeupOnActivation: when a behavior is activated
- WakeupOnDeactivation: when a behavior is deactivated
- WakeupOnTransformChange: when a specified TransformGroup node’s transform changes
- WakeupOnCollisionEntry: when collision is detected between a specified Shape3D node’s Geometry object and any other object
- WakeupOnCollisionMovement: when movement occurs between a specified Shape3D node’s Geometry object and any other object with which it collides
- WakeupOnCollisionExit: when a specified Shape3D node’s Geometry object no longer collides with any other object
- WakeupOnBehaviorPost: when a specified Behavior object posts a specific event
- WakeupOnAWTEvent: when a specified AWT event occurs, such as a mouse press
- WakeupOnElapsedTime: when a specified time interval elapses
- WakeupOnElapsedFrames: when a specified number of frames have been drawn
- WakeupOnSensorEntry: when the center of a specified Sensor enters a specified region
- WakeupOnSensorExit: when the center of a specified Sensor exits a specified region
A Behavior object constructs a WakeupCriterion by providing the appropriate arguments, such as a reference to some scene graph object and a region of interest.
Multiple criteria can be combined using the following classes to form complex wakeup conditions.
- WakeupOr: specifies any number of wakeup conditions logically ORed together
- WakeupAnd: specifies any number of wakeup conditions logically ANDed together
- WakeupOrOfAnds: specifies any number of AND wakeup conditions logically ORed together
- WakeupAndOfOrs: specifies any number of OR wakeup conditions logically ANDed together
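As a plain-Java sketch of the boolean logic behind these combination classes (the or/and helpers and the criteria are illustrative, not the Java® 3D API), a WakeupOr-style condition is satisfied when any criterion fires, while a WakeupAnd-style condition requires all of them:

```java
import java.util.function.BooleanSupplier;

// Illustrative sketch (not the Java 3D API): how WakeupOr / WakeupAnd
// compose individual criteria with boolean logic.
public class WakeupLogicSketch {
    // A criterion is anything that reports whether it is satisfied.
    static boolean or(BooleanSupplier... criteria) {
        for (BooleanSupplier c : criteria)
            if (c.getAsBoolean()) return true;   // any one satisfied -> condition fires
        return false;
    }

    static boolean and(BooleanSupplier... criteria) {
        for (BooleanSupplier c : criteria)
            if (!c.getAsBoolean()) return false; // all must be satisfied
        return true;
    }

    public static void main(String[] args) {
        BooleanSupplier timeElapsed = () -> true;   // e.g. 500 ms have passed
        BooleanSupplier framesDrawn = () -> false;  // e.g. 3 frames not yet drawn
        // WakeupOr-style condition: satisfied because one criterion is true.
        System.out.println(or(timeElapsed, framesDrawn));   // true
        // WakeupAnd-style condition: not satisfied until both are true.
        System.out.println(and(timeElapsed, framesDrawn));  // false
    }
}
```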
The class hierarchy of the WakeupCondition class is shown below:
The following code provides an example of setting a WakeupCondition object:
public void initialize()
{
WakeupCriterion criteria[] = new WakeupCriterion[2];
criteria[0] = new WakeupOnElapsedFrames(3);
criteria[1] = new WakeupOnElapsedTime(500);
WakeupCondition condition = new WakeupOr(criteria);
wakeupOn(condition);
}
A Behavior node provides a WakeupCondition object to the behavior scheduler via its wakeupOn() method, and the behavior scheduler provides an enumeration of that WakeupCondition. The wakeupOn() method should be called from the initialize() and processStimulus() methods, just prior to exiting these methods.
In the current Java® 3D implementation the behavior scheduler, and, therefore, the processStimulus method of the Behavior class as well, run concurrently with the rendering thread. However, a new thread will not start until both the renderer, which may be working on the previous frame, and the behavior scheduler are done.
Java® 3D guarantees that all behaviors with a WakeupOnElapsedFrames will be executed before the next frame starts rendering, i.e. the rendering thread will wait until all behaviors are done with their processStimulus methods before drawing the next frame. In addition, Java® 3D guarantees that all scene graph updates that occur from within a single Behavior object will be reflected in the same frame for consistency purposes.
Finally, Interpolator objects can be used for simple behaviors where a parameter can be varied between a starting and an ending value during a certain time interval.
Lights
Lights can be used to achieve higher quality and realism in the graphics. Lighting capabilities are provided by the class Light and its subclasses. All light objects have a color, an on/off state, and a bounding volume that controls their illumination range. Java® 3D provides the following four types of lights, which are subclasses of the class Light:
- AmbientLight: the rays from an ambient light source object come from all directions illuminating shapes evenly
- DirectionalLight: a directional light source object has parallel rays of light aiming at a certain direction
- PointLight: the rays from a point light source object are emitted radially from a point in all directions
- SpotLight: the rays from a spot light source object are emitted radially from a point in all directions, but only within a cone
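The cone restriction that distinguishes a SpotLight from a PointLight can be sketched as a simple geometric test; the inCone helper and its parameters below are illustrative, not part of the Java® 3D Light API:

```java
// Illustrative sketch (not the Java 3D Light API): testing whether a point
// falls inside a spot light's cone of illumination.
public class SpotConeSketch {
    // Returns true if 'point' lies within the cone whose apex is at 'light',
    // whose axis is the unit vector 'dir', and whose half-angle is 'spreadRadians'.
    static boolean inCone(double[] light, double[] dir, double spreadRadians,
                          double[] point) {
        double[] toPoint = new double[3];
        double len = 0;
        for (int i = 0; i < 3; i++) {
            toPoint[i] = point[i] - light[i];
            len += toPoint[i] * toPoint[i];
        }
        len = Math.sqrt(len);
        double dot = 0;
        for (int i = 0; i < 3; i++) dot += (toPoint[i] / len) * dir[i];
        // Angle between the cone axis and the ray from the light to the point.
        return Math.acos(dot) <= spreadRadians;
    }

    public static void main(String[] args) {
        double[] light = {0, 0, 0}, axis = {0, 0, -1};
        // A point straight down the -z axis is inside a 30-degree cone...
        System.out.println(inCone(light, axis, Math.toRadians(30), new double[]{0, 0, -5})); // true
        // ...but a point well off to the side is not.
        System.out.println(inCone(light, axis, Math.toRadians(30), new double[]{5, 0, -1})); // false
    }
}
```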
6. Performance of Java® 3D
Java® 3D aims at achieving high performance by utilizing the available graphics libraries (OpenGL/Direct3D), using 3D-graphics acceleration where available, and supporting rendering optimizations such as scene reorganization and content culling. It is optimized for performance rather than for quality of image rendering. Compilation of branch groups and use of capability bits enable speed optimizations. It is as fast and high-level as Open Inventor and VRML (Virtual Reality Modeling Language), while it offers the portability of Java® and direct access to, and good integration with, all other available Java® APIs. Java® 3D uses the native code of certain libraries, such as OpenGL, in the final steps of rendering to achieve satisfactory performance levels. Scene reorganization and content culling may be used by the renderer to optimize rendering by following an order that bypasses hidden parts of the scene.
Java® 3D rendering is tuned to the underlying hardware across a wide range of hardware and software platforms. Java® 3D is scalable, taking advantage of the multithreading capabilities of Java® when multiple processors are available. Multiple processors are automatically utilized by its independent and asynchronous components, such as the rendering thread and the behavior scheduler, which can be assigned to different processors. Branches of the scene tree structure can also be manipulated independently and concurrently using multithreading and multiprocessing.
A thread scheduler is implemented inside Java® 3D, giving the Java® 3D architects full control of all threads and eliminating the need to deal with thread priorities. The underlying architecture uses messages to propagate scene graph changes into certain structures, each of which is used to optimize a particular functionality. There are two structures for geometric objects. One organizes the geometry spatially, enabling spatial queries on the scene graph such as picking, collisions, and culling. The other is a state snapshot of the scene graph, known as the render bin, which is associated with each view and is used by the renderer thread. There is also a structure that spatially organizes behavior nodes, and a behavior scheduler thread that executes the behaviors that need to be executed.
The thread scheduler is essentially a big infinite loop implemented inside Java® 3D. In each iteration, the thread scheduler runs each thread that needs to run once, waiting for all threads to complete before entering the next iteration. The behavior and rendering threads may each run once in a single iteration. The following operations are conceptually performed within this infinite loop:
while(true)
{
process input
if(there is a request for exit)
break
perform any behaviors
traverse scene graph and render visible objects
}
Whenever a node of the scene graph is modified, a message is generated with an associated value and any state necessary to reflect the specific change, and the message is queued with all other messages by the thread scheduler. At each iteration the messages are processed and the various structures are updated accordingly. The update time is very short when the messages are simple, which is typically the case. In the current implementation the rendering thread and the behavior thread can run concurrently. In particular, the behavior scheduler, and therefore the processStimulus method of a Behavior object, can run concurrently with the renderer. However, a new frame will not start until both the rendering of the previous frame and the behavior scheduler are done.
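The message-queue scheme described above can be sketched as follows; the class and method names are illustrative, not Java® 3D internals:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of the message-based update scheme: node changes are
// queued as messages and drained once per scheduler iteration, after which
// the derived structures (spatial index, render bin) are up to date.
public class MessageLoopSketch {
    static class Message {
        final String node;
        final float value;
        Message(String node, float value) { this.node = node; this.value = value; }
    }

    private final Queue<Message> queue = new ArrayDeque<>();

    // Called whenever a scene graph node is modified.
    void post(String node, float value) { queue.add(new Message(node, value)); }

    // One scheduler iteration: process every pending message.
    int processPending() {
        int processed = 0;
        for (Message m; (m = queue.poll()) != null; processed++) {
            // ...here the spatial structure and render bin would be updated
            // from m.node and m.value...
        }
        return processed;
    }

    public static void main(String[] args) {
        MessageLoopSketch scheduler = new MessageLoopSketch();
        scheduler.post("transform", 1.0f);
        scheduler.post("color", 0.5f);
        System.out.println(scheduler.processPending()); // 2: both messages drained
        System.out.println(scheduler.processPending()); // 0: queue now empty
    }
}
```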
Finally it offers level-of-detail (LOD) capabilities to further improve performance using a LOD object. An LOD leaf node is an abstract class that operates on a list of Switch group nodes to select one of the children of the Switch nodes. The LOD class is extended to implement various selection criteria, such as the DistanceLOD subclass.
Topics
1. Interfaces
An interface declares a set of methods and constants, without actually providing an implementation for any of those methods. A class is said to implement an interface if it provides definitions for all of the methods declared in the interface.
Interfaces provide a way to prescribe the behavior that a class must have. In this sense, an interface bears some resemblance to an abstract class. An abstract class may contain default implementations for some of its methods; it is an incomplete class that must be specialized by subclassing. By contrast, an interface does not provide default implementations for any of the methods. It is just a way of specifying the functions that a class should contain. There is no notion of specialization through function overriding.
Some points to note about interfaces:
A class may implement more than one interface, whereas it can only extend one parent class.
An interface is treated as a reference type.
Interfaces provide a mechanism for callbacks, rather like pointers to functions in C++.
An interface can extend another interface.
Here is an example of using an interface.
import java.util.*;
interface Collection {
final int MAXIMUM = 100; // An interface can only have constant data.
public void add(Object obj);
public void remove();
public void print();
}
class Stack implements Collection { // A last in first out (LIFO) process.
Vector mVector;
public Stack() {
mVector = new Vector(0); // Create an empty vector.
}
// This adds an element to the top of the stack.
public void add(Object obj) {
if (mVector.size() < MAXIMUM) // Restrict the size of the Stack.
mVector.insertElementAt(obj, 0);
else
System.out.println("Reached maximum size");
}
// This removes an element from the top of the stack.
public void remove() {
mVector.removeElementAt(0);
}
// This prints out the stack in order from top to bottom.
public void print() {
System.out.println(“Printing out the stack”);
for (int i = 0; i < mVector.size(); i++)
System.out.println(mVector.elementAt(i));
}
}
class Queue implements Collection { // A first in first out (FIFO) process.
Vector mVector;
public Queue() {
mVector = new Vector(0); // Create an empty vector.
}
// This adds an element to the bottom of the queue.
public void add(Object obj) {
if (mVector.size() < MAXIMUM) // Restrict the size of the Queue.
mVector.addElement(obj);
else
System.out.println("Reached maximum size");
}
// This removes an element from the top of the queue.
public void remove() {
mVector.removeElementAt(0);
}
// This prints out the queue in order from top to bottom.
public void print() {
System.out.println(“Printing out the queue”);
for (int i = 0; i < mVector.size(); i++)
System.out.println(mVector.elementAt(i));
}
}
class Main {
public static void main(String[] args) {
// Create a stack and add some objects to it. The function CreateSomeObjects takes a
// reference to the Collection interface as an argument, so it does not need to know anything
// about the Stack class except that it supplies all the methods that the Collection interface
// requires. This is an example of using callbacks.
Stack s = new Stack();
CreateSomeObjects(s);
// Remove an element from the stack and then print it out.
s.remove();
s.print(); // This will print out the elements 3,7,5.
// Create a queue and add some objects to it.
Queue q = new Queue();
CreateSomeObjects(q);
// Remove an element from the queue and then print it out.
q.remove();
q.print(); // This will print out the elements 7,3,4.
}
// Create some objects and add them to a collection. Class Integer allows us to create integer
// objects from the corresponding primitive type, int.
public static void CreateSomeObjects(Collection c) {
c.add(new Integer(5));
c.add(new Integer(7));
c.add(new Integer(3));
c.add(new Integer(4));
}
}
2. Exceptions and Error Handling
What happens when a program encounters a run-time error? Should it exit immediately or should it try to recover? The behavior that is desired may vary depending on how serious the error is. A “file not found” error may not be a reason to terminate the program, whereas an “out of memory error” may. One way to keep track of errors is to return an error code from each function. Exceptions provide an alternative way to handle errors.
The basic idea behind exceptions is as follows. Any method with the potential to produce a remediable error should declare the type of error that it can produce using the throws keyword. The basic remediable error type is class Exception, but one may be more specific about the type of exception that can be thrown e.g. IOException refers to an exception thrown during an input or output operation. When an exception occurs, we use the throw keyword to actually create the Exception object and exit the function.
Code that has the potential to produce an exception should be placed within the try block of a try-catch statement. If the code succeeds, then control passes to the next statement following the try-catch statement. If the code within the try block fails, then the code within the catch block is executed. The following example illustrates this.
class LetterTest {
char readLetter() throws Exception { // Indicates type of exception thrown.
int k;
k = System.in.read();
if (k < 'A' || k > 'z') {
throw new Exception(); // Throw an exception.
}
return (char)k;
}
public static void main(String[] args) {
LetterTest a = new LetterTest();
try {
char c = a.readLetter();
String str;
str = "Successfully read letter " + c;
System.out.println(str);
}
catch (Exception e) { // Handle the exception.
System.out.println("Failed to read letter.");
}
}
}
Note: in addition to the Exception class, Java® also provides an Error class, which is reserved for those kinds of problems that a reasonable program should not try to catch.
Topics
Related Links
Java® Beans trail in the Java® Tutorial- a good introduction to Java® Beans.
Java® Beans Development Kit (BDK)- provides a basic development support tool (called the BeanBox) as well as several examples of Java® Bean components. This link also provides links to various commercial development environments for Java® Beans.
Java® Beans API- various interfaces, classes and exception types that you will encounter when developing Java® Beans.
1. Introduction
A Java® Bean is a reusable software component that can be manipulated visually in an application builder tool. The idea is that one can start with a collection of such components, and quickly wire them together to form complex programs without actually writing any new code.
Software components must, in general, adopt standard techniques for interacting with the rest of the world. For example, all GUI components inherit the java.awt.Component class, which means that one can rely on them to have certain standard methods like paint(), setSize(), etc. Java® Beans are not actually required to inherit a particular base class or implement a particular interface. However, they do provide support for some or all of the following key features:
- Support for introspection. Introspection is the process by which an application builder discovers the properties, methods and events that are associated with a Java® Bean.
- Support for properties. These are basically member variables that control the appearance or behavior of the Java® Bean.
- Support for customization of the appearance and behavior of a Java® Bean.
- Support for events. This is a mechanism by which Java® Beans can communicate with one another.
- Support for persistent storage. Persistence refers to the ability to save the current state of an object, so that it can be restored at a later time.
2. The BeanBox
This is a basic tool that Sun provides for testing Java® Beans. Your computer needs to have access to a BDK installation. To run the BeanBox, go to the beans/beanbox subdirectory and then type run. This will bring up three windows:
- The ToolBox window gives you a palette of sample Java® Beans to choose from.
- The BeanBox window is a container within which you can visually wire beans together.
- The Properties window allows you to edit the properties of the currently selected Java® Bean.
Try a simple example: choose Juggler bean from the ToolBox and drop an instance in the BeanBox window. Also create two instances of OurButton. Edit the labels of the buttons to read start and stop using the Properties window. Now wire the start button to the juggler as follows. Select the start button, then go to Edit | Events | action | actionPerformed. Connect the rubber band to the juggler. You will now see an EventTargetDialog box with a list of Juggler methods that could be invoked when the start button is pressed (these are the methods that either take an ActionEvent as an argument or have no arguments at all.) Choose startJuggling as the target method and press OK. The BeanBox now generates an adaptor class to wire the start button to the juggler. Wire the stop button to the juggler’s stopJuggling method in a similar manner.
Now that the program has been designed, you can run it within the BeanBox. Simply press the start button to start juggling and press the stop button to stop juggling. If you wish, you can turn your program into an applet by choosing File | MakeApplet in the BeanBox. This will automatically generate a complete set of files for the applet, which can be run in the appletviewer. (Do not expect current versions of Netscape® and Internet Explorer to work with this applet.)
Let’s take a closer look at how the BeanBox works. On start up, it scans the directory beans/jars for files with the .jar extension that contain Java® Beans. These beans are displayed in the ToolBox window, from where they can be selected and dropped into the BeanBox window. Next, we edited the labels of the two instances of OurButton. The BeanBox determined that OurButton has a member named label by looking for setter and getter methods that follow standard naming conventions called design patterns. If you look at the source code in beans\demo\sunw\demo\buttons\OurButton.java, you will see that OurButton has two methods named
public void setLabel(String newLabel) {
…
}
public String getLabel() {
…
}
Design patterns are an implicit technique by which builder tools can introspect a Java® Bean. There is also an explicit technique for exposing properties, methods and events. This involves writing a bean information class, which implements the BeanInfo interface.
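The implicit design-pattern mechanism can be tried directly with the standard java.beans.Introspector class, which is the machinery builder tools rely on. In this sketch the TinyBean class and the hasProperty helper are illustrative; Introspector discovers the label property purely from the setLabel/getLabel naming convention:

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// Demonstrates introspection via design patterns: a "label" property is
// discovered from the getter/setter names alone, with no BeanInfo class.
public class IntrospectSketch {
    public static class TinyBean {
        private String label = "press";
        public String getLabel() { return label; }
        public void setLabel(String l) { label = l; }
    }

    // Returns true if introspection discovers a property with the given name.
    static boolean hasProperty(Class<?> beanClass, String name) {
        try {
            BeanInfo info = Introspector.getBeanInfo(beanClass);
            for (PropertyDescriptor pd : info.getPropertyDescriptors())
                if (pd.getName().equals(name)) return true;
        } catch (IntrospectionException e) {
            // Treat an introspection failure as "property not found".
        }
        return false;
    }

    public static void main(String[] args) {
        // The setLabel/getLabel pair alone is enough for the tool to see "label".
        System.out.println(hasProperty(TinyBean.class, "label")); // true
    }
}
```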
When we wired the start button to the juggler, the BeanBox set up the juggler to respond to action events generated by the start button. The BeanBox again used design patterns to determine the type of events that can be generated by an OurButton object. The following design patterns indicate that OurButton is capable of firing ActionEvents.
public synchronized void addActionListener(ActionListener l) {
…
}
public synchronized void removeActionListener(ActionListener l) {
…
}
By choosing Edit | Events | action | actionPerformed to connect the start button to the juggler, we were really registering an ActionListener with the start button. The Juggler bean itself does not implement the ActionListener interface. Instead the BeanBox generated an event hookup adaptor, which implements ActionListener and simply calls the juggler’s startJuggling method in its actionPerformed method:
// Automatically generated event hookup file.
package tmp.sunw.beanbox;
import sunw.demo.juggler.Juggler;
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;
public class ___Hookup_1474c0159e implements
java.awt.event.ActionListener, java.io.Serializable {
public void setTarget(sunw.demo.juggler.Juggler t) {
target = t;
}
public void actionPerformed(java.awt.event.ActionEvent arg0) {
target.startJuggling(arg0);
}
private sunw.demo.juggler.Juggler target;
}
A similar event hookup adaptor was generated when we wired the stop button to the juggler’s stopJuggling method.
Why not make Juggler implement the ActionListener interface directly? This is mainly a matter of convenience. Suppose that Juggler implemented ActionListener and that it was registered to receive ActionEvents from both the start button and the stop button. Then the Juggler’s actionPerformed method would need to examine incoming events to determine the event source, before it could know whether to call startJuggling or stopJuggling.
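The adaptor arrangement described above can be sketched in plain Java; the Juggler stand-in and the two hookup lambdas below are illustrative, standing in for the generated adaptor classes:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Sketch of the event hookup adaptor idea: instead of making the target
// implement ActionListener and dispatch on the event source, each button
// gets its own tiny adaptor that calls exactly one target method.
public class AdaptorSketch {
    static class Juggler {
        boolean juggling = false;
        void startJuggling() { juggling = true; }
        void stopJuggling()  { juggling = false; }
    }

    public static void main(String[] args) {
        Juggler juggler = new Juggler();
        // One adaptor per wiring; the target never inspects the event source.
        ActionListener startHookup = e -> juggler.startJuggling();
        ActionListener stopHookup  = e -> juggler.stopJuggling();

        ActionEvent press = new ActionEvent("button", ActionEvent.ACTION_PERFORMED, "press");
        startHookup.actionPerformed(press);
        System.out.println(juggler.juggling); // true after the "start" press
        stopHookup.actionPerformed(press);
        System.out.println(juggler.juggling); // false after the "stop" press
    }
}
```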
3. Creating a Java® Bean
This example illustrates how to create a simple Java® Bean. Java Bean classes must be made serializable so that they support persistent storage. To make use of the default serialization capabilities in Java®, the class needs to implement the Serializable interface or inherit a class that implements the Serializable interface. Note that the Serializable interface does not have any methods. It just serves as a flag to say that the designer has tested the class to make sure it works with default serialization. Here is the code:
SimpleBean.java
import java.awt.*;
import java.io.Serializable;
public class SimpleBean extends Canvas implements Serializable {
//Constructor sets inherited properties
public SimpleBean() {
setSize(60,40);
setBackground(Color.red);
}
}
Since this class extends a GUI component, java.awt.Canvas, it will be a visible Java® Bean. Java® Beans may also be invisible.
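As a sketch of what the Serializable flag enables, default serialization from java.io can save a bean's state to bytes and restore it later. PlainBean and roundTrip are illustrative names, and a plain class is used instead of an AWT component so the example stays self-contained:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Demonstrates default serialization: the state of a Serializable object
// survives a write-to-bytes / read-from-bytes round trip.
public class SerializeSketch {
    public static class PlainBean implements Serializable {
        private static final long serialVersionUID = 1L;
        public String label = "start";
    }

    static PlainBean roundTrip(PlainBean bean) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(bean);              // persist current state
            }
            try (ObjectInputStream in = new ObjectInputStream(
                     new ByteArrayInputStream(bytes.toByteArray()))) {
                return (PlainBean) in.readObject(); // restore it later
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        PlainBean bean = new PlainBean();
        bean.label = "stop";
        System.out.println(roundTrip(bean).label); // stop: state survived the trip
    }
}
```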
Now the Java® Bean must be compiled and packaged into a JAR file. First run the compiler:
javac SimpleBean.java
Then create a manifest file
manifest.tmp
Name: SimpleBean.class
Java-Bean: True
Finally create the JAR file:
jar cfmv SimpleBean.jar manifest.tmp SimpleBean.class
The JAR file can now be placed in the beans/jars so that the BeanBox will find it on startup, or it can be loaded subsequently by choosing File | LoadJar.
4. Support for Properties and Events
This example builds on the SimpleBean class. It illustrates how to add customizable properties to a Java® Bean and how to generate and receive property change events.
SimpleBean.java
import java.awt.*;
import java.io.Serializable;
import java.beans.*;
public class SimpleBean extends Canvas implements Serializable,
java.beans.PropertyChangeListener {
// Constructor sets inherited properties
public SimpleBean() {
setSize(60,40);
setBackground(Color.red);
}
// This section illustrates how to add customizable properties to the Java Bean. The names
// of the property setter and getter methods must follow specific design patterns that allow
// the BeanBox (or builder tool) to determine the name of the property variable upon
// introspection.
private Color beanColor = Color.green;
public void setBeanColor(Color newColor) {
Color oldColor = beanColor;
beanColor = newColor;
repaint();
// This relates to bound property support (see below).
changes.firePropertyChange("beanColor", oldColor, newColor);
}
public Color getBeanColor() {
return beanColor;
}
public void paint(Graphics g) {
g.setColor(beanColor);
g.fillRect(20,5,20,30);
}
// This section illustrates how to implement bound property support. Bound property
// support allows other objects to respond when a property change occurs in this Java
// Bean. Remember that each property setter method must fire a property change event,
// so that registered listeners can be properly notified. The addPropertyChangeListener
// and removePropertyChangeListener methods follow design patterns that indicate the
// ability of this Java Bean to generate property change events. As it happens, these
// methods override methods of the same names, which are inherited through java.awt.Canvas.
private PropertyChangeSupport changes = new PropertyChangeSupport(this);
public void addPropertyChangeListener(PropertyChangeListener l) {
changes.addPropertyChangeListener(l);
}
public void removePropertyChangeListener(PropertyChangeListener l) {
changes.removePropertyChangeListener(l);
}
// This section illustrates how to implement a bound property listener, which will allow this
// Java Bean to register itself to receive property change events fired by other objects.
// Registration simply involves making a call to the other object’s addPropertyChangeListener
// method with this Java Bean as the argument. If you are using the BeanBox, however, you
// will typically use the event hookup adaptor mechanism to receive the events. In this case,
// you can set the target method to be the propertyChange method. (Another word about the
// BeanBox: the Edit | bind property option is a useful way to make a property change in one
// object automatically trigger a property change in another object. In this case, the BeanBox
// will invoke the correct property setter using code in sunw/beanbox/PropertyHookup.java.
// An adaptor class will not be generated in this case.)
public void propertyChange(PropertyChangeEvent evt) {
String propertyName = evt.getPropertyName();
System.out.println("Received property change event " + propertyName);
}
}
Topics
1. Java® RMI
(Ref: Just Java® 1.2 Ch 16)
The Java® Remote Method Invocation (RMI) framework provides a simple way for Java® programs to communicate with each other. It allows a client program to call methods that belong to a remote object, which lives on a server located elsewhere on the network. The client program can pass arguments to the methods of the remote object and obtain return values, as seamlessly as invoking a method of a local object.
The operation of a remote method call is as follows. The client program actually calls into a dummy method, called a stub, which resides locally. The stub gets the function arguments, serializes them, and then sends them over the network to the server. On the server side, a corresponding bare-bones method (called a skeleton) deserializes the argument objects and passes them on to the real server method. This process is reversed in order to send the result back to the client.
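The stub idea can be sketched locally with java.lang.reflect.Proxy: the caller invokes an ordinary interface method, and a handler intercepts it the way a generated stub would before serializing the arguments and contacting the server. The Weather interface, the makeStub helper, and the canned reply are all illustrative; no networking takes place:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Local sketch of the stub idea: a dynamic proxy intercepts interface calls,
// standing in for the role an RMI stub plays on the client side.
public class StubSketch {
    public interface Weather {
        String getWeather();
    }

    // Builds a proxy that plays the stub's role for the Weather interface.
    static Weather makeStub() {
        InvocationHandler handler = (Object proxy, Method method, Object[] a) -> {
            if (method.getName().equals("getWeather"))
                return "sunny"; // a real stub would serialize args and contact the server
            throw new UnsupportedOperationException(method.getName());
        };
        return (Weather) Proxy.newProxyInstance(
            Weather.class.getClassLoader(), new Class<?>[]{Weather.class}, handler);
    }

    public static void main(String[] args) {
        // To the caller this looks like a plain local method call.
        System.out.println(makeStub().getWeather()); // sunny
    }
}
```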
The Interface to the Remote Object
Both the client and the server must agree on a common interface, which describes the methods that are to be invoked on the server. For example:
WeatherIntf.java
// An interface that describes the service we will be accessing remotely.
public interface WeatherIntf extends java.rmi.Remote {
public String getWeather() throws java.rmi.RemoteException;
}
2. The RMI Client
Here is the example code for the client. The call to Naming.lookup() returns a reference to a Remote object that is available on the server (localhost in this case) under the service name /WeatherServer. Before we can access its methods, the Remote object must be cast to the appropriate interface type (WeatherIntf in this case).
RMIdemo.java
import java.rmi.*;
public class RMIdemo {
public static void main(String[] args) {
try {
// Obtain a reference to an object that lives remotely on a server.
// The object is published under the service name WeatherServer and
// it is known to implement interface WeatherIntf. We cast to this
// interface in order to access the object’s methods.
Remote robj = Naming.lookup("//localhost/WeatherServer");
WeatherIntf weatherServer = (WeatherIntf)robj;
// Access the services provided by the remote object.
while (true) {
String forecast = weatherServer.getWeather();
System.out.println("The weather will be " + forecast);
Thread.sleep(500);
}
}
catch (Exception e) {
System.out.println(e.getMessage());
}
}
}
3. The RMI Server
Here is the example code for the server. The server makes its services available to the client by registering them with the RMI registry using a call to Naming.rebind(). In this code, the server side object is made available under the name /WeatherServer.
WeatherServer.java
import java.rmi.*;
import java.rmi.server.UnicastRemoteObject;
public class WeatherServer extends UnicastRemoteObject implements WeatherIntf {
public WeatherServer() throws java.rmi.RemoteException {
super();
}
// The method that will be invoked by the client.
public String getWeather() throws RemoteException {
return Math.random() > 0.5 ? "sunny" : "rainy";
}
public static void main(String[] args) {
// We need to set a security manager since this is a server.
// This will allow us to customize access privileges to
// remote clients.
System.setSecurityManager(new RMISecurityManager());
try {
// Create a WeatherServer object and announce its service to the
// registry.
WeatherServer weatherServer = new WeatherServer();
Naming.rebind("/WeatherServer", weatherServer);
}
catch (Exception e) {
System.out.println(e.getMessage());
}
}
}
- Compile all three .java files using javac:
javac *.java
- Generate the stub and the skeleton classes for the server:
rmic WeatherServer
- Put the class files in a location that the JDK knows about e.g. the current directory or $JAVAHOME/jre/classes.
- Start the RMI registry:
rmiregistry
- Create a permissions file for the server, named permit:
grant {
permission java.net.SocketPermission "*", "connect";
permission java.net.SocketPermission "*", "accept";
// Here is how you could set file permissions:
// permission java.io.FilePermission "/tmp/*", "read";
};
- Start the server using the security policy prescribed by the permissions file:
java -Djava.security.policy=permit WeatherServer
- Start the client:
java RMIdemo
The client will now communicate with the server to find out the current weather.
Topics
1. Member Access
(Ref. Lippman 13.1.3, 17.2, 18.3)
Types of Access Privilege
Member Access Under Inheritance
Key Points
- Private members are only accessible within the class that they are declared. They are not accessible by derived class definitions.
- Protected members are not accessible from outside the class through objects of the class. They are always accessible within a first-level derived class.
- The inheritance access specifier places an upper limit on the access level of inherited members in the derived class.
2. A Linked List Class
Class Declaration
list.h
#ifndef _LIST_H_
#define _LIST_H_
#include <iostream.h>
#ifndef TRUE
#define TRUE 1
#endif // TRUE
#ifndef FALSE
#define FALSE 0
#endif // FALSE
// Generic list element. ListElement is an abstract class which will be
// subclassed by users of the List class in order to create different types
// of list elements.
class ListElement {
private:
ListElement *mpNext; // Pointer to next element in the list.
public:
ListElement() {mpNext = NULL;}
virtual ~ListElement() {}
// A pure virtual method which returns some measure of the element’s
// importance for purposes of ordering the list. The implementation
// will be provided by individual subclasses. The list will be ordered
// from most significant (at the head) to least significant.
virtual float ElementValue() = 0;
// A pure virtual method which prints out the contents of the list element.
// Implementation will be provided by individual subclasses.
virtual void print() = 0;
// Grant special access privilege to class list.
friend class List;
// An operator<< which prints out a list.
friend ostream& operator<<(ostream &os, const List& list);
};
// A linked list class.
class List {
private:
ListElement *mpHead; // Pointer to the first element in the list.
public:
// Create an empty list.
List();
// Destroy the list, including all of its elements.
~List();
// Add an element to the list. Returns TRUE if successful.
int AddElement(ListElement *pElement);
// Remove an element from the list. Returns TRUE if successful.
int RemoveElement(ListElement *pElement);
// Return a pointer to the largest element. Does not remove it from the list.
ListElement *GetLargest();
// Return a pointer to the smallest element. Does not remove it from the list.
ListElement *GetSmallest();
// An operator<< which prints out the entire list.
friend ostream& operator<<(ostream &os, const List& list);
};
#endif // _LIST_H_
Class Definition
list.C
#include "list.h"
// Create an empty list.
List::List() {
mpHead = NULL;
}
// Destroy the list, including all of its elements.
List::~List() {
ListElement *pCurrent, *pNext;
for (pCurrent = mpHead; pCurrent != NULL; pCurrent = pNext) {
pNext = pCurrent->mpNext;
delete pCurrent;
}
}
// Add an element to the list. Returns TRUE if successful.
int List::AddElement(ListElement *pElement) {
ListElement *pCurrent, *pPrevious;
float fValue = pElement->ElementValue();
pPrevious = mpHead;
for (pCurrent = mpHead; pCurrent != NULL; pCurrent = pCurrent->mpNext) {
if (fValue > pCurrent->ElementValue()) {
// Insert the new element before the current element.
pElement->mpNext = pCurrent;
if (pCurrent == mpHead)
mpHead = pElement;
else
pPrevious->mpNext = pElement;
return TRUE;
}
pPrevious = pCurrent;
}
// We have reached the end of the list.
if (mpHead == NULL)
mpHead = pElement;
else
pPrevious->mpNext = pElement;
pElement->mpNext = NULL;
return TRUE;
}
// Remove an element from the list. Returns TRUE if successful.
int List::RemoveElement(ListElement *pElement) {
ListElement *pCurrent, *pPrevious;
pPrevious = mpHead;
for (pCurrent = mpHead; pCurrent != NULL; pCurrent = pCurrent->mpNext) {
if (pCurrent == pElement) {
if (pCurrent == mpHead)
mpHead = pCurrent->mpNext;
else
pPrevious->mpNext = pCurrent->mpNext;
delete pCurrent;
return TRUE;
}
pPrevious = pCurrent;
}
// The given element was not found in the list.
return FALSE;
}
// Return a pointer to the largest element. Does not remove it from the list.
ListElement *List::GetLargest() {
return mpHead;
}
// Return a pointer to the smallest element. Does not remove it from the list.
ListElement *List::GetSmallest() {
ListElement *pCurrent, *pPrevious;
pPrevious = mpHead;
for (pCurrent = mpHead; pCurrent != NULL; pCurrent = pCurrent->mpNext) {
pPrevious = pCurrent;
}
return pPrevious;
}
// An operator<< which prints out the entire list.
ostream& operator<<(ostream &os, const List& list) {
ListElement *pCurrent;
for (pCurrent = list.mpHead; pCurrent != NULL;
pCurrent = pCurrent->mpNext) {
// Print out the contents of the current list element. Since the
// print method is declared to be virtual in the ListElement class,
// the actual print method to be used will be determined at run time.
pCurrent->print();
}
return os;
}
Using the Linked List Class
shapes.h
// Some shapes that we may wish to store in a linked list.
// We will order the shape objects according to their areas.
#ifndef _SHAPE_H_
#define _SHAPE_H_
#define PI 3.14159
#include "list.h"
class Triangle : public ListElement {
private:
float mfBase, mfHeight;
public:
// Unless we provide an explicit base class initializer, the base
// class will be initialized using its default constructor.
Triangle() {mfBase = mfHeight = 0.0;}
Triangle(float fBase, float fHeight) {mfBase = fBase; mfHeight = fHeight;}
~Triangle() {}
float ElementValue() {return (mfBase * mfHeight / 2);}
void print() {cout << "Triangle: area = " << ElementValue() << endl;}
};
class Rectangle : public ListElement {
private:
float mfBase, mfHeight;
public:
// Unless we provide an explicit base class initializer, the base
// class will be initialized using its default constructor.
Rectangle() {mfBase = mfHeight = 0.0;}
Rectangle(float fBase, float fHeight) {mfBase = fBase; mfHeight = fHeight;}
~Rectangle() {}
float ElementValue() {return (mfBase * mfHeight);}
void print() {cout << "Rectangle: area = " << ElementValue() << endl;}
};
class Circle : public ListElement {
private:
float mfRadius;
public:
// Unless we provide an explicit base class initializer, the base
// class will be initialized using its default constructor.
Circle() {mfRadius = 0.0;}
Circle(float fRadius) {mfRadius = fRadius;}
~Circle() {}
float ElementValue() {return (PI * mfRadius * mfRadius);}
void print() {cout << "Circle: area = " << ElementValue() << endl;}
};
#endif // _SHAPE_H_
list_test.C
#include "shapes.h"
main() {
List list;
ListElement *p;
p = new Triangle(4, 3);
list.AddElement(p);
p = new Rectangle(2, 1);
list.AddElement(p);
p = new Circle(2);
list.AddElement(p);
p = new Triangle(3, 2);
list.AddElement(p);
p = new Circle(1);
list.AddElement(p);
cout << list << endl;
list.RemoveElement(list.GetLargest());
cout << list << endl;
list.RemoveElement(list.GetSmallest());
cout << list << endl;
}
How to Use make
Introduction
make is a command generator: it produces a sequence of commands for execution by the UNIX® shell. These commands usually relate to the maintenance of a set of files in a software development project. We will use make to help us organize our C++ and C source code files during the compilation and linking process. In particular, make can be used to sort out the dependency relations among the various source files, object files and executables and to determine exactly how the object files and executables will be produced.
Invoking make from the Command Line
make may be invoked from the command line by typing:
make -f makefilename program
Here, program is the name of the target i.e. the program to be made. makefilename is a description file which tells the make utility how to build the target program from its various components. Each of these components could be a target in itself. make would therefore have to build these targets, using information in the description file, before program can be made. program need not necessarily be the highest level target in the hierarchy, although in practice it often is.
It is not always necessary to specify the name of the description file when invoking make. For example,
make program
would cause make to look in the current directory for a default description file named makefile or Makefile, in that order.
Furthermore, it is not even necessary to specify the name of the final target. Simply typing
make
will build the first target found in the default description file, together with all of its components. On the other hand, it is also possible to specify multiple targets when invoking make.
make Description Files (makefiles)
Here is an example of a simple makefile:
program: main.o iodat.o
	cc -o program main.o iodat.o
main.o: main.c
	cc -c main.c
iodat.o: iodat.c
	cc -c iodat.c
Each entry consists of a dependency line containing a colon, and one or more command lines each starting with a tab. Dependency lines have one or more targets to the left of the colon. To the right of the colon are the component files on which the target(s) depend.
A command line will be executed if any target listed on the dependency line does not exist, or if any of the component files are more recent than a target.
Here are some points to remember:
- Comments start with a pound sign (#).
- Continuation of a line is denoted by a backslash (\).
- Lines containing equals signs (=) are macro definitions (see next section).
- Each command line is typically executed in a separate Bourne shell, i.e. sh(1).
To execute more than one command line in the same shell, type them on the same line, separated by semicolons. Use a \ to continue the line if necessary. For example,
program: main.o iodat.o
	cd newdir; \
	cc -o program main.o iodat.o
would change to the directory newdir before invoking cc. (Note that executing the two commands in separate shells would not produce the required effect, since the cd command is only effective within the shell from which it was invoked.)
The Bourne shell's pattern matching characters may be used in command lines, as well as to the right of the colon in dependency lines e.g.
program: *.c
	cc -o program *.c
Macros
Macro Definitions in the Description File
Macro definitions are of the form:
name = string
Subsequent references to $(name) or ${name} are then interpreted as string. Macros are typically grouped together at the beginning of the description file. Macros which have no string to the right of the equals sign are assigned the null string. Macros may be included within macro definitions, regardless of the order in which they are defined.
Here is an example of a macro:
CC = /mit/gnu/arch/sun4x_57/bin/g++
program: program.C
	${CC} -o program program.C
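Since macros may refer to other macros regardless of definition order, a description file such as this hypothetical sketch is also legal; COMPILE is only expanded when the command line is actually executed:

```make
# COMPILE refers to macros that are defined further down. This works
# because make expands macros at the point of use, not at definition.
COMPILE = ${CC} ${CFLAGS}
CC = g++
CFLAGS = -O
program: program.C
	${COMPILE} -o program program.C
```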
Shell Environment Variables
Shell variables that were part of the environment before make was invoked are available as macros within make. Within a make description file, however, shell environment variables must be surrounded by parentheses or braces, unless they consist of a single character. For example, ${PWD} may be used in a description file to refer to the current working directory.
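As a hypothetical illustration of using a shell environment variable inside a description file (the target name show is made up):

```make
# PWD comes from the shell environment. The braces are required because
# the variable name is longer than a single character.
show:
	echo Building in ${PWD}
```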
Command Line Macro Definitions
Macros can be defined when invoking make e.g.
make program CC=/mit/gnu/arch/sun4x_57/bin/g++
Internal Macros
make has a few predefined macros:
- $? evaluates to the list of components that are younger than the current target. Can only be used in description file command lines.
- $@ evaluates to the current target name. Can only be used in description file command lines.
- $$@ also evaluates to the current target name. However, it can only be used on dependency lines.
Example
PROGS = prog1 prog2 prog3
${PROGS}: $$@.c
	cc -o $@ $?
This will compile the three files prog1.c, prog2.c and prog3.c, unless any of them have already been compiled. During the compilation process, each of the programs becomes the current target in turn. In this particular example, the same effect would be obtained if we replaced the $? by $@.c
Order of Priority of Macro Assignments
The following is the order of priority of macro assignments, from least to greatest:
- Internal (default) macro definitions.
- Shell environment variables.
- Description file macro definitions.
- Command line macro definitions.
Items 2. and 3. can be interchanged by specifying the -e option to make.
Macro String Substitution
String substitutions may be performed on all macros used in description file shell commands. However, substitutions occur only at the end of the macro, or immediately before white space. The following example illustrates this:
LETTERS = abcxyz xyzabc xyz
print:
	echo $(LETTERS:xyz=def)
This description file will produce the output
abcdef xyzabc def
Suffix Rules
The existence of naming and compiling conventions makes it possible to considerably simplify description files. For example, the C compiler requires that C source files always have a .c suffix. Such naming conventions enable make to perform many tasks based on suffix rules. make provides a set of default suffix rules. In addition, new suffix rules can be defined by the user.
For example, the description file shown earlier can be simplified to
program: main.o iodat.o
	cc -o program main.o iodat.o
make will use the following default macros and suffix rules to determine how to build the components main.o and iodat.o.
CC = cc
CFLAGS = -O
.SUFFIXES: .o .c
.c.o:
	${CC} ${CFLAGS} -c $<
The entries on the .SUFFIXES line represent the suffixes which make will consider significant. Thus, in building iodat.o from the above description file, make looks for a user-specified dependency line containing iodat.o as a target. Finding no such dependency, make notes that the .o suffix is significant and therefore it looks for another file in the current directory which can be used to make iodat.o. Such a file must
- have the same name (apart from the suffix) as iodat.o,
- have a significant suffix, and
- be able to be used to make iodat.o according to an existing suffix rule.
make then applies the above suffix rule which specifies how to build a .o file from a .c file. The $< macro evaluates to the component that triggered the suffix rule i.e. iodat.c.
After the components main.o and iodat.o have been updated in this way (if necessary), the target program will be built according to the directions in the description file.
Internal Macros in Suffix Rules
The following internal macros can only be used in suffix rules.
- $< evaluates to the component that is being used to make the target.
- $* evaluates to the filename part (without any suffix) of the component that is being used to make the target.
Note that the $? macro cannot occur in suffix rules. The $@ macro, however, can be used.
Null Suffixes
Files with null suffixes (no suffix at all) can be made using a suffix rule which has only a single suffix e.g.
.c:
	${CC} -o $@ $<
This suffix rule will be invoked to produce an executable called program from a source file program.c, if the description file contains a line of the form:
program:
Note that in this particular situation it would be incorrect to specify that program depended on program.c, because make would then consider the command line to contain a null command and would therefore not invoke the suffix rule. This problem does not arise when building a .o file from a .c file using suffix rules. A .o file can be specified to depend on a .c file (and possibly some additional header files) because of the one-to-one relationship that exists between .o and .c files.
Writing Your Own Suffix Rules
Suffix rules and the list of significant suffixes can be redefined. A line containing .SUFFIXES by itself will delete the current list of significant suffixes e.g.
.SUFFIXES:
.SUFFIXES: .o .c
.c.o:
	${CC} -c -o $@ $<
References
[1] Talbot, S. "Managing Projects with Make." O'Reilly & Associates, Inc.
Topics
1. Threads, Processes and Multitasking
Multitasking is the ability of a computer’s operating system to run several programs (or processes) concurrently on a single CPU. This is done by switching from one program to another fast enough to create the appearance that all programs are executing simultaneously. There are two types of multitasking:
Preemptive multitasking. In preemptive multitasking, the operating system decides how to allocate CPU time slices to each program. At the end of a time slice, the currently active program is forced to yield control to the operating system, whether it wants to or not. Examples of operating systems that support preemptive multitasking are Unix®, Windows® 95/98, Windows® NT and the planned release of Mac® OS X.
Cooperative multitasking. In cooperative multitasking, each program controls how much CPU time it needs. This means that a program must cooperate in yielding control to other programs, or else it will hog the CPU. Examples of operating systems that support cooperative multitasking are Windows® 3.1 and Mac® OS 8.5.
Multithreading extends the concept of multitasking by allowing individual programs to perform several tasks concurrently. Each task is referred to as a thread and it represents a separate flow of control. Multithreading can be very useful in practical applications. For example, if a web page is taking too long to load in a web browser, the user should be able to interrupt the loading of the page by clicking on the stop button. The user interface can be kept responsive to the user by using a separate thread for the network activity needed to load the page.
What, then, is the difference between a process and a thread? The answer is that each process has its own set of variables, whereas threads share the same data and system resources. A multithreaded program must therefore be very careful about the way that threads access and modify data, or else unpredictable behavior may occur.
2. How to Create Threads
(Ref. Java® Tutorial)
We can create a new thread using the Thread class provided in the java.lang package. There are two ways to use the Thread class.
- By creating a subclass of Thread.
- By writing a class that implements the Runnable interface.
Subclassing the Thread class
In this approach, we create a subclass of the Thread class. The Thread class has a method named run(), which we can override in our subclass. Our implementation of the run() method must contain all code that is to be executed within the thread.
class MyClass extends Thread {
// …
public void run() {
// All code to be executed within the thread goes here.
}
}
We can create a new thread by instantiating our class, and we run it by calling the start() method that we inherited from class Thread.
MyClass a = new MyClass();
a.start();
This approach for creating a thread works fine from a technical standpoint. Conceptually, however, it does not make that much sense to say that MyClass “is a” Thread. All that we are really interested in doing is to provide a run() method that the Thread class can execute. The next approach is geared to do exactly this.
Implementing the Runnable Interface
In this approach, we write a class that implements the Runnable interface. The Runnable interface requires us to implement a single method named run(), within which we place all code that is to be executed within the thread.
class MyClass implements Runnable {
// …
public void run() {
// All code to be executed within the thread goes here.
}
}
We can create a new thread by creating a Thread object from an object of type MyClass. We run the thread by calling the Thread object’s start() method.
MyClass a = new MyClass();
Thread t = new Thread(a);
t.start();
3. The LifeCycle of a Thread
(Ref. Java® Tutorial)
A thread can be in one of four states during its lifetime:
- new - A new thread is one that has been created (using the new operator), but has not yet been started.
- runnable - A thread becomes runnable once its start() method has been invoked. This means that the code in the run() method can execute whenever the thread receives CPU time from the operating system.
- blocked - A thread can become blocked if one of the following events occurs:
  - The thread's sleep() method is invoked. In this case, the thread remains blocked until the specified number of milliseconds elapses.
  - The thread calls the wait() method of an object. In this case, the thread remains blocked until either the object's notify() method or its notifyAll() method is called from another thread. The calls to wait(), notify() and notifyAll() are typically found within synchronized methods of the object.
  - The thread has blocked on an input/output operation. In this case, the thread remains blocked until the i/o operation has completed.
- dead - A thread typically dies when the run() method has finished executing.
Note: The following methods in the java.lang.Thread class should no longer be used, since they can lead to unpredictable behavior: stop(), suspend() and resume().
The following example illustrates various thread states. The main thread in our program creates a new thread, Thread-0. It then starts Thread-0, thereby making Thread-0 runnable so that it prints out an integer every 500 milliseconds. We call the sleep() method to enforce the 500 millisecond delay between printing two consecutive integers. In the meantime, the main thread proceeds to print out an integer every second only. The output from the program shows that the two threads are running in parallel. When the main thread finishes its for loop, it stops Thread-0.
We maintain a variable, myThread, which initially references Thread-0. This variable is polled by the run() method to make sure that it is still referencing Thread-0. All we have to do to stop the thread is to set myThread to null. This will cause the run() method to terminate normally.
class MyClass implements Runnable {
int i;
Thread myThread;
public MyClass() {
i = 0;
}
// This will terminate the run() method.
public void stop() {
myThread = null;
}
// The run() method simply prints out a sequence of integers, one every half second.
public void run() {
// Get a handle on the thread that we are running in.
myThread = Thread.currentThread();
// Keep going as long as myThread is the same as the current thread.
while (Thread.currentThread() == myThread) {
System.out.println(Thread.currentThread().getName() + “: " + i);
i++;
try {
Thread.sleep(500); // Tell the thread to sleep for half a second.
}
catch (InterruptedException e) {}
}
}
}
class Threadtest {
public static void main(String[] args) {
MyClass a = new MyClass();
Thread t = new Thread(a);
// Start another thread. This thread will run in parallel to the main thread.
System.out.println(Thread.currentThread().getName() + ": Starting a separate thread");
t.start();
// The main thread proceeds to print out a sequence of integers of its own, one every second.
for (int i = 0; i < 6; i++) {
System.out.println(Thread.currentThread().getName() + “: " + i);
// Tell the main thread to sleep for a second.
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {}
}
// Stop the parallel thread. We do this by setting myThread to null in our runnable object.
System.out.println(Thread.currentThread().getName() + ": Stopping the thread");
a.stop();
}
}
4. Animations
Here is an example of a simple animation. We have used a separate thread to control the motion of a ball on the screen.
anim.html
<HTML>
<BODY>
<APPLET CODE="Animation.class" WIDTH=300 HEIGHT=400>
</APPLET>
</BODY>
</HTML>
Animation.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public class Animation extends JApplet implements Runnable, ActionListener {
int miFrameNumber = -1;
int miTimeStep;
Thread mAnimationThread;
boolean mbIsPaused = false;
Button mButton;
Point mCenter;
int miRadius;
int miDX, miDY;
public void init() {
// Make the animation run at 20 frames per second. We do this by
// setting the timestep to 50ms.
miTimeStep = 50;
// Initialize the parameters of the circle.
mCenter = new Point(getSize().width/2, getSize().height/2);
miRadius = 15;
miDX = 4; // X offset per timestep.
miDY = 3; // Y offset per timestep.
// Create a button to start and stop the animation.
mButton = new Button("Stop");
getContentPane().add(mButton, "North");
mButton.addActionListener(this);
// Create a JPanel subclass and add it to the JApplet. All drawing
// will be done here, so we must write the paintComponent() method.
// Note that the anonymous class has access to the private data of
// class Animation, because it is defined locally.
getContentPane().add(new JPanel() {
public void paintComponent(Graphics g) {
// Paint the background.
super.paintComponent(g);
// Display the frame number.
g.drawString("Frame " + miFrameNumber, getSize().width/2 - 40,
getSize().height - 15);
// Draw the circle.
g.setColor(Color.red);
g.fillOval(mCenter.x-miRadius, mCenter.y-miRadius, 2*miRadius,
2*miRadius);
}
}, "Center");
}
public void start() {
if (mbIsPaused) {
// Don’t do anything. The animation has been paused.
} else {
// Start animating.
if (mAnimationThread == null) {
mAnimationThread = new Thread(this);
}
mAnimationThread.start();
}
}
public void stop() {
// Stop the animating thread by setting the mAnimationThread variable
// to null. This will cause the thread to break out of the while loop,
// so that the run() method terminates naturally.
mAnimationThread = null;
}
public void actionPerformed(ActionEvent e) {
if (mbIsPaused) {
mbIsPaused = false;
mButton.setLabel("Stop");
start();
} else {
mbIsPaused = true;
mButton.setLabel("Start");
stop();
}
}
public void run() {
// Just to be nice, lower this thread’s priority so it can’t
// interfere with other processing going on.
Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
// Remember the starting time.
long startTime = System.currentTimeMillis();
// Remember which thread we are.
Thread currentThread = Thread.currentThread();
// This is the animation loop.
while (currentThread == mAnimationThread) {
// Advance the animation frame.
miFrameNumber++;
// Update the position of the circle.
move();
// Draw the next frame.
repaint();
// Delay depending on how far we are behind.
try {
startTime += miTimeStep;
Thread.sleep(Math.max(0,
startTime-System.currentTimeMillis()));
}
catch (InterruptedException e) {
break;
}
}
}
// Update the position of the circle.
void move() {
mCenter.x += miDX;
if (mCenter.x - miRadius < 0 ||
mCenter.x + miRadius > getSize().width) {
miDX = -miDX;
mCenter.x += 2*miDX;
}
mCenter.y += miDY;
if (mCenter.y - miRadius < 0 ||
mCenter.y + miRadius > getSize().height) {
miDY = -miDY;
mCenter.y += 2*miDY;
}
}
}
Contents
- Local and Global Variables
- Reference Types
- Functions in C++
- Basic Input and Output
- Creating and Destroying Objects - Constructors and Destructors
1. Local and Global Variables
(Ref. Lippman 8.1-8.3)
Local variables are objects that are only accessible within a single function (or a sub-block within a function). Global variables, on the other hand, are objects that are generally accessible to every function in a program. It is possible, though potentially confusing, for a local object and a global object to share the same name. In the following example, the local object x shadows the object x in the global namespace. We must therefore use the global scope operator, ::, to access the global object.
main_file.C
float x; // A global object.
int main () {
float x; // A local object with the same name.
x = 5.0; // This refers to the local object.
::x = 7.0; // This refers to the global object.
}
What happens if we need to access the global object in another file? The object has already been defined in main_file.C, so we should not set aside new memory for it. We can inform the compiler of the existence of the global object using the extern keyword.
another_file.C
extern float x; // Declares the existence of a global object external to this file.
void do_something() {
x = 3; // Refers to the global object defined in main_file.C.
}
2. Reference Types
(Ref. Lippman 3.6)
Reference types are a convenient alternative way to use the functionality that pointers provide. A reference is just a nickname for existing storage. The following example defines an integer object, i, and then it defines a reference variable, r, by the statement
int& r = i;
Be careful not to confuse this use of & with the address of operator. Also note that, unlike a pointer, a reference must be initialized at the time it is defined.
#include <stdio.h>
int main() {
int i = 0;
int& r = i; // Create a reference to i.
i++;
printf("r = %d\n", r);
}
3. Functions in C++
Argument Passing
(Ref. Lippman 7.3)
Arguments can be passed to functions in two ways:
- Pass by value.
- Pass by reference.
When an argument is passed by value, the function gets its own local copy of the object that was passed in. On the other hand, when an argument is passed by reference, the function simply refers to the object in the calling program.
// Pass by value.
void increment (int i) {
i++; // Modifies a local variable.
}
// Pass by reference.
void decrement (int& i) {
i--; // Modifies storage in the calling function.
}
#include <stdio.h>
int main () {
int k = 0;
increment(k); // This has no effect on k.
decrement(k); // This will modify k.
printf("%d\n", k);
}
Passing a large object by reference can improve efficiency since it avoids the overhead of creating an extra copy. However, it is important to understand the potentially undesirable side effects that can occur. If we want to protect against modifying objects in the calling program, we can pass the argument as a constant reference:
// Pass by reference.
void decrement (const int& i) {
i--; // This statement is now illegal.
}
Return by Reference
(Ref. Lippman 7.4)
A function may return a reference to an object, as long as the object is not local to the function. We may decide to return an object by reference for efficiency reasons (to avoid creating an extra copy). Returning by reference also allows us to have function calls that appear on the left hand side of an assignment statement. In the following contrived example, select_month() is used to pick out the month member of the object today and set its value to 9.
struct date {
int day;
int month;
int year;
};
int& select_month(struct date &d) {
return d.month;
}
#include <stdio.h>
int main() {
struct date today;
select_month(today) = 9; // This is equivalent to: today.month = 9;
printf("%d\n", today.month);
}
Default Arguments
(Ref. Lippman 7.3.5)
C++ allows us to specify default values for function arguments. Arguments with default values must all appear at the end of the argument list. In the following example, the third argument of move() has a default value of zero.
void move(int dx, int dy, int dz = 0) {
// Move some object in 3D space. If dz = 0, then move the object in 2D space.
}
int main() {
move(2, 3, 5);
move(2, 3); // dz assumes the default value, 0.
}
Function Overloading
(Ref. Lippman 9.1)
In C++, two functions can share the same name as long as their signatures are different. The signature of a function is another name for its parameter list. Function overloading is useful when two or more functionally similar tasks need to be implemented in different ways. For example:
void draw(double center, double radius) {
// Draw a circle.
}
void draw(int left, int top, int right, int bottom) {
// Draw a rectangle.
}
int main() {
draw(0, 5); // This will draw a circle.
draw(0, 4, 6, 8); // This will draw a rectangle.
}
Inline Functions
(Ref. Lippman 3.15, 7.6)
Every function call involves some overhead. If a small function has to be called a large number of times, the relative overhead can be high. In such instances, it makes sense to ask the compiler to expand the function inline. In the following example, we have used the inline keyword to make swap() an inline function.
inline void swap(int& a, int& b) {
int tmp = a;
a = b;
b = tmp;
}
#include <stdio.h>
main() {
int i = 2, j = 3;
swap(i, j);
printf(“i = %d j = %d\n”, i, j);
}
This code will be expanded as
main() {
int i = 2, j = 3;
int tmp = i;
i = j;
j = tmp;
printf(“i = %d j = %d\n”, i, j);
}
Whenever the compiler needs to expand a call to an inline function, it needs to know the function definition. For this reason, inline functions are usually placed in a header file that can be included where necessary. Note that the inline specification is only a recommendation to the compiler, which the compiler may choose to ignore. For example, a recursive function cannot be completely expanded inline.
4. Basic Input and Output
(Ref. Lippman 1.5)
C++ provides three predefined objects for basic input and output operations: cin, cout and cerr. All three objects can be accessed by including the header file iostream.h.
Reading from Standard Input: cin
cin is an object of type istream that allows us to read in a stream of data from standard input. It is functionally equivalent to the scanf() function in C. The following example shows how cin is used in conjunction with the >> operator. Note that the >> points towards the object into which we are reading data.
#include <iostream.h> // Provides access to cin and cout.
#include <stdio.h> /* Provides access to printf and scanf. */
int main() {
int i;
cin >> i; // Uses the stream input object, cin, to read data into i.
scanf("%d", &i); /* Equivalent C-style statement. */
float a;
cin >> i >> a; // Reads multiple values from standard input.
scanf("%d%f", &i, &a); /* Equivalent C-style statement. */
}
Writing to Standard Output: cout
cout is an object of type ostream that allows us to write out a stream of data to standard output. It is functionally equivalent to the printf() function in C. The following example shows how cout is used in conjunction with the << operator. Note that the << points away from the object from which we are writing out data.
#include <iostream.h> // Provides access to cin and cout.
#include <stdio.h> /* Provides access to printf and scanf. */
int main() {
cout << "Hello World!\n"; // Uses the stream output object, cout, to print out a string.
printf("Hello World!\n"); /* Equivalent C-style statement. */
int i = 7;
cout << "i = " << i << endl; // Sends multiple objects to standard output.
printf("i = %d\n", i); /* Equivalent C-style statement. */
}
Writing to Standard Error: cerr
cerr is also an object of type ostream. It is provided for the purpose of writing out warning and error messages to standard error. The usage of cerr is identical to that of cout. Why then should we bother with cerr? The reason is that it makes it easier to filter out warning and error messages from real data. For example, suppose that we compile the following program into an executable named foo:
#include <iostream.h>
int main() {
int i = 7;
cout << i << endl; // This is real data.
cerr << "A warning message" << endl; // This is a warning.
}
We could separate the data from the warning by redirecting the standard output to a file, while allowing the standard error to be printed on our console.
athena% foo > temp
A warning message
athena% cat temp
7
5. Creating and Destroying Objects - Constructors and Destructors
(Ref. Lippman 14.1-14.3)
Let’s take a closer look at how constructors and destructors work.
A Point Class
Here is a complete example of a Point class. We have organized the code into three separate files:
point.h contains the declaration of the class, which describes the structure of a Point object.
point.C contains the definition of the class, i.e. the actual implementation of the methods.
point_test.C is a program that uses the Point class.
Our Point class has three constructors and one destructor.
Point(); // The default constructor.
Point(float fX, float fY); // A constructor that takes two floats.
Point(const Point& p); // The copy constructor.
~Point(); // The destructor.
These constructors can be respectively invoked by object definitions such as
Point a;
Point b(1.0, 2.0);
Point c(b);
The default constructor, Point(), is so named because it can be invoked without any arguments. In our example, the default constructor initializes the Point to (0,0). The second constructor creates a Point from a pair of coordinates of type float. Note that we could combine these two constructors into a single constructor which has default arguments:
Point(float fX=0.0, float fY=0.0);
The third constructor is known as a copy constructor since it creates one Point from another. The object that we want to clone is passed in as a constant reference. Note that we cannot pass by value in this instance because doing so would lead to an unterminated recursive call to the copy constructor. In this example, the destructor does not have to perform any clean-up operations. Later on, we will see examples where the destructor has to release dynamically allocated memory.
Constructors and destructors can be triggered more often than you may imagine. For example, each time a Point is passed to a function by value, a local copy of the object is created. Likewise, each time a Point is returned by value, a temporary copy of the object is created in the calling program. In both cases, we will see an extra call to the copy constructor, and an extra call to the destructor. You are encouraged to put print statements in every constructor and in the destructor, and then carefully observe what happens.
point.h
// Declaration of class Point.
#ifndef _POINT_H_
#define _POINT_H_
#include <iostream.h>
class Point {
// The state of a Point object. Property variables are typically
// set up as private data members, which are read from and
// written to via public access methods.
private:
float mfX;
float mfY;
// The behavior of a Point object.
public:
Point(); // The default constructor.
Point(float fX, float fY); // A constructor that takes two floats.
Point(const Point& p); // The copy constructor.
~Point(); // The destructor.
void print() { // This function will be made inline by default.
cout << "(" << mfX << "," << mfY << ")" << endl;
}
void set_x(float fX);
float get_x();
void set_y(float fY);
float get_y();
};
#endif // _POINT_H_
point.C
// Definition of class Point.
#include "point.h"
// A constructor which creates a Point object at (0,0).
Point::Point() {
cout << "In constructor Point::Point()" << endl;
mfX = 0.0;
mfY = 0.0;
}
// A constructor which creates a Point object from two
// floats.
Point::Point(float fX, float fY) {
cout << "In constructor Point::Point(float fX, float fY)" << endl;
mfX = fX;
mfY = fY;
}
// A constructor which creates a Point object from
// another Point object.
Point::Point(const Point& p) {
cout << "In constructor Point::Point(const Point& p)" << endl;
mfX = p.mfX;
mfY = p.mfY;
}
// The destructor.
Point::~Point() {
cout << "In destructor Point::~Point()" << endl;
}
// Modifier for x coordinate.
void Point::set_x(float fX) {
mfX = fX;
}
// Accessor for x coordinate.
float Point::get_x() {
return mfX;
}
// Modifier for y coordinate.
void Point::set_y(float fY) {
mfY = fY;
}
// Accessor for y coordinate.
float Point::get_y() {
return mfY;
}
point_test.C
// Test program for the Point class.
#include "point.h"
int main() {
Point a;
Point b(1.0, 2.0);
Point c(b);
// Print out the current state of all objects.
a.print();
b.print();
c.print();
b.set_x(3.0);
b.set_y(4.0);
// Print out the current state of b.
cout << endl;
b.print();
}
Operator Overloading
(Ref. Lippman 15.1-15.7, 15.9)
We have already seen how functions can be overloaded in C++. We can also overload operators, such as the + operator, for classes that we write. Note that operators for built-in types may not be created or modified. A complete list of overloadable operators can be found in Lippman, Table 15.1.
A Complex Number Class
In this example, we have overloaded the following operators: +, *, >, =, (), [] and the cast operator.
operator+() and operator*()
Let a and b be of type Complex. The + operator in the expression a+b can be interpreted as
a.operator+(b)
where operator+() is a member function of class Complex. We then have to write this function so that it adds the real and imaginary parts of a and b and returns the result as a new Complex object. Note that the function returns by value, since it has to create a temporary object for the result of the addition.
Our implementation will also work for an expression like a+b+c. In this case, the operator will be invoked in the following order
(a.operator+(b)).operator+(c)
The member operator+() will also work for an expression like
a + 7.0
because we have a constructor that can create a Complex from a single double. However, the expression
7.0 + a
will not work, because operator+() is a member of class Complex, and not the built-in double type.
To solve this problem, we can make the operator a global function, as we have done with operator*(). The expression 7.0 * a will be interpreted as
operator*(7.0, a)
Since a global function does not automatically have access to the private data of class Complex, we can grant special access to operator*() by making it a friend function of class Complex. In general, friend functions and classes should be used sparingly, since they are a violation of the rule of encapsulation.
operator>()
We have overloaded this operator to allow comparison of two Complex numbers, based on their magnitudes.
operator=()
The = operator is designed to work with statements like
a = b;
a = b = c;
These statements are respectively interpreted as
a.operator=(b);
a.operator=(b.operator=(c));
In this case, the operator changes the object that invokes it, and it returns a reference to this object so that the second statement will work. The keyword this gives us a pointer to the current object within the class definition.
operator Point()
This operator allows us to convert a Complex object to a Point object. It can be used in an explicit cast, like
Complex a;
Point p;
p = (Point)a;
but it could also be invoked implicitly, as in
p = a;
Hence, a great deal of caution should be used when providing user-defined type conversions in this way. An alternative way to convert from a Complex to a Point is to give the Point class a constructor that takes a Complex as an argument. However, we might not have access to the source code for the Point class, if it were written by someone else.
Note that overloaded cast operators do not have a return type.
operator()()
This is the function call operator. It is invoked by a Complex object a as
a()
Here we have overloaded operator()() to return true if the object has an imaginary part and false otherwise.
operator[]()
The overloaded subscript operator is useful if we wish to access fields of the object like array elements. In our example, we have made a[0] refer to the real part and a[1] refer to the imaginary part of the object.
operator<<()
The output operator cannot be overloaded as a member function because we do not have access to the ostream class to which the predefined object cout belongs. Instead, operator<<() is overloaded as a global function. We must return a reference to the ostream class so that the output operator can be concatenated. For example,
cout << a << b;
will be invoked as
operator<<(operator<<(cout, a), b)
complex.h
// Interface for class Complex.
#ifndef _COMPLEX_H_
#define _COMPLEX_H_
#include <iostream.h>
#include "point.h"
#ifndef DEBUG_PRINT
#ifdef _DEBUG
#define DEBUG_PRINT(str) cout << str << endl;
#else
#define DEBUG_PRINT(str)
#endif
#endif
class Complex {
private:
double mdReal;
double mdImag;
public:
// This combines three constructors in one. Here we have used an initialization list to initialize
// the private data, instead of using assignment statements in the body of the constructor.
Complex(double dReal=0.0, double dImag=0.0) : mdReal(dReal), mdImag(dImag) {
DEBUG_PRINT("In Complex::Complex(double dReal, double dImag)")
}
// The copy constructor.
Complex(const Complex& c);
~Complex() {
DEBUG_PRINT("In Complex::~Complex()")
}
void print();
// Overloaded member operators.
Complex operator+(const Complex& c) const; // Overloaded + operator.
int operator>(const Complex& c) const; // Overloaded > operator.
Complex& operator=(const Complex& c); // Overloaded = operator.
operator Point() const; // Overloaded cast-to-Point operator.
bool operator()(void) const; // Overloaded function call operator.
double& operator[](int i); // Overloaded subscript operator.
// Overloaded global operators. We make these operators friends of class
// Complex, so that they will have direct access to the private data.
friend ostream& operator<<(ostream& os, const Complex& c); // Overloaded output operator.
friend Complex operator*(const Complex& c, const Complex& d); // Overloaded * operator.
};
#endif // _COMPLEX_H_
complex.C
// Implementation for class Complex.
#include "complex.h"
#include <stdlib.h>
// Definition of copy constructor.
Complex::Complex(const Complex& c) {
DEBUG_PRINT("In Complex::Complex(const Complex& c)")
mdReal = c.mdReal;
mdImag = c.mdImag;
}
// Definition of overloaded + operator.
Complex Complex::operator+(const Complex& c) const {
DEBUG_PRINT("In Complex Complex::operator+(const Complex& c) const")
return Complex(mdReal + c.mdReal, mdImag + c.mdImag);
}
// Definition of overloaded > operator.
int Complex::operator>(const Complex& c) const {
double sqr1 = mdReal * mdReal + mdImag * mdImag;
double sqr2 = c.mdReal * c.mdReal + c.mdImag * c.mdImag;
DEBUG_PRINT("In int Complex::operator>(const Complex& c) const")
return (sqr1 > sqr2);
}
// Definition of overloaded assignment operator.
Complex& Complex::operator=(const Complex& c) {
DEBUG_PRINT("In Complex& Complex::operator=(const Complex& c)")
mdReal = c.mdReal;
mdImag = c.mdImag;
return *this;
}
// Definition of overloaded cast-to-Point operator. This converts a Complex object to a Point object.
Complex::operator Point() const {
float fX, fY;
DEBUG_PRINT("In Complex::operator Point() const")
// Our Point class uses floats instead of doubles. In this case, we make a conscious decision
// to accept a loss in precision by converting the doubles to floats. Be careful when doing this!
fX = (float)mdReal;
fY = (float)mdImag;
return Point(fX, fY);
}
// Definition of overloaded function call operator. We have defined this operator to allow us to test
// whether a number is complex or real.
bool Complex::operator()(void) const {
DEBUG_PRINT("In bool Complex::operator()(void) const")
if (mdImag == 0.0)
return false; // Number is real.
else
return true; // Number is complex.
}
// Definition of overloaded subscript operator. We have defined this operator to allow access to
// the real and imaginary parts of the object.
double& Complex::operator[](int i) {
DEBUG_PRINT("In double& Complex::operator[](int)")
switch(i) {
case 0:
return mdReal;
case 1:
return mdImag;
default:
cerr << "Index out of bounds" << endl;
exit(1); // A function in the C standard library; a nonzero status signals an error.
}
}
// Definition of a print function.
void Complex::print() {
cout << mdReal << " + j" << mdImag << endl;
}
// Definition of overloaded output operator. Note that this is a global function. We can
// access the private data of the Complex object c because the operator is a friend function.
ostream& operator<<(ostream& os, const Complex& c) {
DEBUG_PRINT("In ostream& operator<<(ostream&, const Complex&)")
os << c.mdReal << " + j" << c.mdImag;
return os;
}
// Definition of overloaded * operator. By making this operator a global function, we can
// handle statements such as a = 7 * b, where a and b are Complex objects.
Complex operator*(const Complex& c, const Complex& d) {
DEBUG_PRINT("In Complex operator*(const Complex& c, const Complex& d)")
double dReal = c.mdReal*d.mdReal - c.mdImag*d.mdImag;
double dImag = c.mdReal*d.mdImag + c.mdImag*d.mdReal;
return Complex(dReal, dImag);
}
complex_test.C
#include "complex.h"
int main() {
Complex a;
Complex b;
Complex *c;
Complex d;
// Use of constructors and the overloaded operator=().
a = (Complex)2.0; // Same as a = Complex(2.0);
b = Complex(3.0, 4.0);
c = new Complex(5.0, 6.0);
// Use of the overloaded operator+().
d = a + b; // Same as d = a.operator+(b);
d.print();
d = a + b + *c; // Same as d = (a.operator+(b)).operator+(*c);
d.print();
// Use of the overloaded operator>().
if (b > a)
cout << "b > a" << endl;
else
cout << "b <= a" << endl;
// Use of cast-to-Point operator. This will convert a Complex object to a Point object.
// An alternative way to handle the type conversion is to give the Point class a constructor
// that takes a Complex object as an argument.
Point p;
p = (Point)b;
p.print();
// Use of the overloaded operator()().
if (a() == true)
cout << "a is a complex number" << endl;
else
cout << "a is a real number" << endl;
// Use of the overloaded operator[](). This will change the imaginary part of a.
a[1] = 8.0;
a.print();
// Use of the overloaded global operator<<().
cout << "a = " << a << endl;
// Use of the overloaded global operator*(). The double literal constant will be passed as the
// first argument to operator*() and it will be converted to a Complex object using the Complex
// constructor. This statement would not be legal if the operator were a member function.
d = 7.0 * b;
cout << "d = " << d << endl;
delete c; // Release the dynamically allocated Complex object.
}
Contents
- Object-Oriented Design vs Procedural Design
- The HelloWorld Procedure and the HelloWorld Object
- C++ Data Types
- Expressions
- Coding Style
1. Object-Oriented Design vs Procedural Design
Many of you will already be familiar with one or more procedural languages. Examples of such languages are FORTRAN 77, Pascal and C. In the procedural programming paradigm, one focuses on the decomposition of software into various functional components. In other words, the program is organized into a collection of functions (also known as procedures or subroutines), which will be executed in a defined order to produce the desired result.
By contrast, object-based programming focuses on the organization of software into a collection of components, called objects, that group together
- Related items of data, known as properties.
- Operations that are to be performed on the data, which are known as methods.
In other words, an object is a model of a real world concept, which possesses both state and behavior.
Programming languages that allow us to create objects are said to support abstract data types. Examples of such languages are CLU, Ada, Modula-2.
The object-oriented programming paradigm goes a step beyond abstract data types by adding two new features: inheritance and polymorphism. We will talk about these ideas in depth later, but for now it will be sufficient to say that their purpose is to facilitate the management of objects that have similar characteristics. For example, squares, triangles and circles are all instances of shapes. Their common properties are that they all have a centroid and an area. Their common method might be that they all need to be displayed on the screen of a computer. Examples of languages that support the object-oriented paradigm are C++, Java® and Smalltalk.
2. The HelloWorld Procedure and the HelloWorld Object
Let’s take a look at two simple programs that print out the string, Hello World!
The HelloWorld Procedure
Here is the procedural version of the program, written in C. The first statement is a preprocessor directive that tells the compiler to include the contents of the header file stdio.h. We include this file because it declares the existence of the built-in function, printf(). Every C program must have a top-level function named main(), which provides the entry point to the program.
#include <stdio.h>
/* The HelloWorld procedure definition. */
void HelloWorld() {
printf("Hello World!\n");
}
/* The main program. */
int main() {
HelloWorld(); /* Execute the HelloWorld procedure. */
return 0; /* Indicates successful completion of the program. */
}
The HelloWorld Object
Here is the object-based version of the program, written in C++. We have created a new data type, HelloWorld, that is capable of printing out the words we want. In C++, the keyword class is used to declare a new data type. Our class has three publicly accessible methods, HelloWorld(), ~HelloWorld() and print(). The first two methods have special significance and they are respectively known as a constructor and the destructor. The constructor has the same name as the class. It is an initialization method that will be automatically invoked whenever we create a HelloWorld object. The destructor also has the same name as the class, but with a ~ prefix. It is a finalization method that will be automatically invoked whenever a HelloWorld object is destroyed. In our class, the print() method is the only one that actually does anything useful.
It is important to understand the distinction between a class and an object. A class is merely a template for creating one or more objects. Our main program creates a single object named a based on the class definition that we have provided. We then send the object a “print” message by selecting and invoking the print() method using the . operator. We are able to access the print() method in main() because we have made it a public member function of the class.
#include <stdio.h>
// The HelloWorld class definition.
class HelloWorld {
public:
HelloWorld() {} // Constructor.
~HelloWorld() {} // Destructor.
void print() {
printf("Hello World!\n");
}
}; // Note that a semicolon is required here.
// The main program.
int main() {
HelloWorld a; // Create a HelloWorld object.
a.print(); // Send a "print" message to the object.
return 0;
}
C++ as a Superset of C
Although C++ is generally thought of as an object-oriented language, it does support the procedural programming paradigm as well. In fact, C++ supports all the features of C in addition to providing new features of its own. For example, a C++ program may include C-style comments that use the /* */ delimiters as well as C++-style comments that use the // syntax.
/* C-style comments are also allowed in C++. */
// Alternative comment syntax that is only allowed in C++.
3. C++ Data Types
Built-in Data Types
(Ref. Lippman 3.1, 3.2)
C++ built-in data types are similar to those found in C. The basic built-in types include
- A boolean type: bool (only available in Standard C++).
- Character types: char, unsigned char and wchar_t (wchar_t supports wide characters and is only available in Standard C++).
- Integer types: short (or short int), int, long (or long int), and their unsigned counterparts unsigned short, unsigned int and unsigned long.
- Floating point types: float, double and long double.
Not all computing platforms agree on the actual size of the built-in data types, but on a typical 32-bit platform bool and char occupy 1 byte, short occupies 2 bytes, int, long and float occupy 4 bytes, and double occupies 8 bytes.
A literal constant is a constant value of some type. Examples of literal constants are 7 (an int), 3.14 (a double), 2.5f (a float), 'a' (a char) and true (a bool).
Note that there is no built-in data type for strings. Strings can be represented as character arrays or by using the string type provided in the Standard C++ library. A string literal constant is of the form
“Hello World!”
User-defined Data Types
We have already seen how we can define new data types by writing a class, and for the most part we will use classes. However, C++ also provides extended support for C-style structures. For example, C++ allows member functions to be packaged in a struct in addition to member data. The most significant difference between a class and a struct is that by default, the members of a class are private, whereas the members of a struct are public.
struct date {
int day;
int month;
int year;
void set_date(int d, int m, int y); // Member functions only allowed in C++.
};
int main() {
struct date a; /* C-style definition. */
date b; // Allowable C++ definition.
}
Pointer Types
(Ref. Lippman 3.3)
Pointer variables (or pointers) are a powerful concept that allow us to manipulate objects by their memory address rather than by their name. It is important to understand pointers clearly since they are used extensively in this course and in real world software.
A pointer must convey two pieces of information, both of which are necessary to access the object that it points to:
- the memory address of the object
- the type of the object
The following example illustrates the use of a pointer to an object of type double. The pointer is defined by the statement
double *p;
p can now hold the memory address of a double object, such as d. We obtain the address of d by applying the address of operator, &d, and we then store it in p. Now that p contains a valid address, we can refer to the object d by applying the dereference operator, *p. Notice that we have used * in two different contexts, with different meanings in each case. The meaning of & also depends on the context in which it is used.
#include <stdio.h>
int main() {
double d; // A double object.
double *p; // A variable that is a pointer to a double.
p = &d; // Take the memory address of d and store it in p.
d = 7.0; // Store a double precision number in d.
printf("The value of the object d is %lf\n", d);
printf("The value of the object that p points to is %lf\n", *p);
printf("The address of the object that p points to is %u\n", p);
}
Here is the output from a trial run:
The value of the object d is 7.000000
The value of the object that p points to is 7.000000
The address of the object that p points to is 4026528296
Reference Types
(Ref. Lippman 3.6)
As an added convenience, C++ provides reference types, which are an alternative way to use the functionality that pointers provide. A reference is just a nickname for existing storage.
The following example defines an integer object, i, and then it defines a reference variable, r, by the statement
int& r = i;
Be careful not to confuse this use of & with the address of operator. Also note that, unlike a pointer, a reference must be initialized at the time it is defined.
#include <stdio.h>
int main() {
int i = 0;
int& r = i; // Create a reference to i.
i++;
printf("r = %d\n", r);
}
Explicit Type Conversion
(Ref. Lippman 4.14)
Explicit type conversion can be performed using a cast operator. The following code shows three alternative ways to explicitly convert an int to a float.
int main() {
int a;
float b;
a = 3;
b = (float)a; /* C-style cast operator. */
b = float(a); // Alternative type conversion notation allowed in C++.
b = static_cast<float>(a); // A second alternative, allowed only in Standard C++.
}
const Keyword
(Ref. Lippman 3.5)
The const keyword is used to designate storage whose contents cannot be changed. A const object must be initialized at the time it is defined.
const int i = 10; /* Allowed both in C and C++. */
const int j; /* This is illegal. */
Variable Definitions
In C++, variable definitions may occur practically anywhere within a code block. A code block refers to any chunk of code that lies within a pair of scope delimiters, {}. For example, the following C program requires i and j to be defined at the top of the main() function.
#include <stdio.h>
int main() {
int i, j; /* C requires variable definitions to be at the top of a code block. */
for (i = 0; i < 5; i++) {
printf("Done with C\n");
}
j = 10;
}
In the C++ version of the program, we can define the variables i and j when they are first used.
#include <stdio.h>
int main() {
for (int i = 0; i < 5; i++) { // In Standard C++, i is available anywhere within the for loop.
printf("Still learning C++\n");
}
int j = 10;
}
4. Expressions
(Ref. Lippman 4.1-4.5, 4.7, 4.8, 4.13, 4.14)
Operator Precedence
An expression consists of one or more operands and a set of operations to be applied to them. The order in which operators are applied to operands is determined by operator precedence. For example, the expression
1 + 4 * 3 / 2 == 7 && !0
is evaluated as
((1 + ((4 * 3) / 2)) == 7) && (!0)
Note that the right hand side of the logical AND operator is only evaluated if the left hand side evaluates to true. (For a table of operator precedence, see Lippman, Table 4.4.)
Arithmetic Conversions
The evaluation of arithmetic expressions follows two general guidelines:
- Wherever necessary, types are promoted to a wider type in order to prevent the loss of precision.
- Integral types (these are the various boolean, character and integer types) are promoted to the int data type prior to evaluation of the expression.
5. Coding Style
Coding styles tend to vary from one individual to another. While you are free to develop your own style, it is important to make your code consistent and readable. Software organizations frequently try to enforce consistency by developing a set of coding guidelines for programmers.
Here is an example of an inconsistent coding style. The curly braces in the two for loops are aligned differently. The second style is usually preferred because it is more compact and it avoids excessive indentation.
#include <stdio.h>
int main() {
int i;
for (i = 0; i < 5; i++)
{
printf("This convention aligns the curly braces.\n");
}
for (i = 0; i < 5; i++) {
printf("This is a more compact convention which aligns ");
printf("the closing brace with the for statement.\n");
}
}
Physical Simulation Example
The following example shows how you might integrate a numerical simulation with a Java® animation. Here, we have considered a simple spring-mass-damper system of the form
d^2x/dt^2 + 2 xi w0 dx/dt + w0^2 x = 0
and computed the solution numerically using an explicit Forward Euler time-stepping scheme. Try increasing the size of the time step and notice that the simulation becomes unstable when the growth factors become larger than 1. When using explicit time integration, we must therefore choose our time step to be smaller than the critical time step.
Now try using an implicit Backward Euler time-stepping scheme. In this case, the simulation remains stable even for large time steps because the growth factors are always smaller than 1. You may also wish to determine the exact solution to the differential equation and compare it to the numerical solution.
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
class Vec2D {
private float[] vec;
public Vec2D(float fX, float fV) {
vec = new float[2];
vec[0] = fX;
vec[1] = fV;
}
public void translate(float fDx) {
vec[0] += fDx;
}
public void setPos(float fX) {
vec[0] = fX;
}
public void setVel(float fV) {
vec[1] = fV;
}
public float getPos() {
return vec[0];
}
public float getVel() {
return vec[1];
}
}
class Matrix2D {
private float[][] mat;
public Matrix2D(float a11, float a12, float a21, float a22) {
mat = new float[2][2];
mat[0][0] = a11;
mat[0][1] = a12;
mat[1][0] = a21;
mat[1][1] = a22;
}
public void multiply(Vec2D vec) {
float fX = mat[0][0] * vec.getPos() + mat[0][1] * vec.getVel();
float fV = mat[1][0] * vec.getPos() + mat[1][1] * vec.getVel();
vec.setPos(fX);
vec.setVel(fV);
}
public void invert() {
float det = mat[0][0]*mat[1][1]-mat[0][1]*mat[1][0];
float tmp = mat[0][0];
mat[0][0] = mat[1][1] / det;
mat[1][1] = tmp / det;
mat[0][1] = -mat[0][1] / det;
mat[1][0] = -mat[1][0] / det;
}
}
class Ball {
Vec2D mXState; // x position and velocity.
Vec2D mYState; // y position and velocity.
Matrix2D mMatrix;
int miRadius;
int miWindowWidth, miWindowHeight;
public Ball(int iRadius, int iW, int iH) {
miRadius = iRadius;
mXState = new Vec2D(0.0f, 0.0f);
mYState = new Vec2D(0.0f, 0.0f);
miWindowWidth = iW;
miWindowHeight = iH;
}
public void setPosition(float fXPos, float fYPos) {
mXState.setPos(fXPos);
mYState.setPos(fYPos);
}
public void setVelocity(float fXVel, float fYVel) {
mXState.setVel(fXVel);
mYState.setVel(fYVel);
}
public void setParams(float fXi, float fW0, float fDt, boolean explicit) {
float fReal1 = 0.0f, fImag1 = 0.0f; // First eigenvalue.
float fReal2 = 0.0f, fImag2 = 0.0f; // Second eigenvalue.
float G1, G2; // Growth factors.
// Determine the eigenvalues.
if (fXi < 1.0f) {
fReal1 = fReal2 = -fW0*fXi;
fImag1 = (float)(fW0*Math.sqrt(1-fXi*fXi));
fImag2 = -fImag1;
System.out.println("System is underdamped.");
System.out.println("Eigenvalues are: " + fReal1 + " +/- " + fImag1 + "i");
}
else {
fReal1 = -fW0*(fXi + (float)Math.sqrt(fXi*fXi-1));
fReal2 = -fW0*(fXi - (float)Math.sqrt(fXi*fXi-1));
System.out.println("System is overdamped or critically damped.");
System.out.println("Eigenvalues are: " + fReal1 + " and " + fReal2);
}
if (explicit) {
// Forward Euler.
mMatrix = new Matrix2D(1.0f, fDt, -fW0*fW0*fDt, 1-2*fXi*fW0*fDt);
G1 = (float)Math.sqrt(Math.pow(1+fReal1*fDt,2.0) + Math.pow(fImag1*fDt,2.0));
G2 = (float)Math.sqrt(Math.pow(1+fReal2*fDt,2.0) + Math.pow(fImag2*fDt,2.0));
}
else {
// Backward Euler.
mMatrix = new Matrix2D(1.0f, -fDt, fW0*fW0*fDt, 1+2*fXi*fW0*fDt);
mMatrix.invert();
G1 = (float)(1.0/Math.sqrt(Math.pow(1-fReal1*fDt,2.0) + Math.pow(fImag1*fDt,2.0)));
G2 = (float)(1.0/Math.sqrt(Math.pow(1-fReal2*fDt,2.0) + Math.pow(fImag2*fDt,2.0)));
}
System.out.println("Growth factors are " + G1 + " and " + G2);
}
public int getXPos() {
return (int)mXState.getPos();
}
public int getYPos() {
return (int)mYState.getPos();
}
public void draw(Graphics g) {
g.setColor(Color.red);
g.fillOval(miWindowWidth/2+(int)mXState.getPos()-miRadius,
miWindowHeight/2+(int)mYState.getPos()-miRadius,
2*miRadius, 2*miRadius);
}
// Update the position of the ball.
void move() {
mMatrix.multiply(mYState);
}
}
public class Animation extends JApplet implements Runnable, ActionListener {
int miFrameNumber = 0;
int miTimeStep;
Thread mAnimationThread;
boolean mbIsPaused = false;
Button mButton;
Ball ball;
public void init() {
// Time step in milliseconds.
miTimeStep = 20; // Try changing this to (a) 50 ms and (b) 60 ms.
// Initialize the parameters of the ball. The parameters refer to the
// differential equation: d^2 x/dt^2 + 2 xi w0 dx/dt + w0^2 x = 0
int iRadius = 15;
float fXPos = 0.0f; // Initial x displacement
float fYPos = 100.0f; // Initial y displacement
float fXVel = 0.0f; // Initial x velocity
float fYVel = 0.0f; // Initial y velocity
float fXi = 0.05f; // xi
float fW0 = 2.0f; // w0
boolean explicit = true; // true: forward Euler, false: backward Euler
ball = new Ball(iRadius, getSize().width, getSize().height);
ball.setPosition(fXPos, fYPos);
ball.setVelocity(fXVel, fYVel);
ball.setParams(fXi, fW0, miTimeStep/1000.0f, explicit);
// Create a button to start and stop the animation.
mButton = new Button("Stop");
getContentPane().add(mButton, "North");
mButton.addActionListener(this);
// Create a JPanel subclass and add it to the JApplet. All drawing
// will be done here, so we must write the paintComponent() method.
// Note that the anonymous class has access to the private data of
// class Animation, because it is defined locally.
getContentPane().add(new JPanel() {
public void paintComponent(Graphics g) {
// Paint the background.
super.paintComponent(g);
// Display the frame number.
g.drawString("Frame " + miFrameNumber, getSize().width/2 - 40,
getSize().height - 15);
// Draw the rubber band.
g.drawLine(getSize().width/2, 0,
getSize().width/2+ball.getXPos(),
getSize().height/2+ball.getYPos());
// Draw the ball.
ball.draw(g);
}
}, "Center");
}
public void start() {
if (mbIsPaused) {
// Don't do anything. The animation has been paused.
} else {
// Start animating.
if (mAnimationThread == null) {
mAnimationThread = new Thread(this);
}
mAnimationThread.start();
}
}
public void stop() {
// Stop the animating thread by setting the mAnimationThread variable
// to null. This will cause the thread to break out of the while loop,
// so that the run() method terminates naturally.
mAnimationThread = null;
}
public void actionPerformed(ActionEvent e) {
if (mbIsPaused) {
mbIsPaused = false;
mButton.setLabel("Stop");
start();
} else {
mbIsPaused = true;
mButton.setLabel("Start");
stop();
}
}
public void run() {
// Just to be nice, lower this thread's priority so it can't
// interfere with other processing going on.
Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
// Remember the starting time.
long startTime = System.currentTimeMillis();
// Remember which thread we are.
Thread currentThread = Thread.currentThread();
// This is the animation loop.
while (currentThread == mAnimationThread) {
// Draw the next frame.
repaint();
// Advance the animation frame.
miFrameNumber++;
// Update the position of the ball.
ball.move();
// Delay depending on how far we are behind.
try {
startTime += miTimeStep;
Thread.sleep(Math.max(0,
startTime-System.currentTimeMillis()));
}
catch (InterruptedException e) {
break;
}
}
}
}
Topics
1. Introduction
Java® is an object-oriented programming language that resembles C++ in many respects. One of the major differences is that Java® programs are intended to be architecture-neutral i.e. a Java® program should, in theory, be able to run on a Unix® workstation, a PC or a Macintosh® without recompilation. In C++, we compiled our programs into machine-dependent object code that was linked to produce an executable. By contrast, Java® programs are compiled into machine-independent byte code. The compiled program is then run within a Java® interpreter, which is responsible for executing the byte code instructions. The Java® interpreter is typically referred to as the Java® Virtual Machine, and it must be present on each computer that runs the program.
Follow this link to see Sun Microsystems’ overview: About the Java® Technology
It takes time to learn everything about Java® and it is important to set your expectations accordingly. There are two main challenges:
- Learning the basic syntax of the language.
- Gaining familiarity with the libraries of reusable software components that are available to Java® programmers, especially the commonly used parts of the Java® Core API (Application Programming Interface).
In the lectures that follow, we will attempt to familiarize you with the basic syntax, and point out the syntactic and semantic differences between Java® and C++. We will also introduce you to some of the more important class libraries. The Java® API is well documented and you should quickly learn how to navigate the online documentation to find the classes that you need.
The Java® language is still evolving. We will be using the Java® 2 platform, which is also known as the Java® Development Kit (JDK). The latest release is Java® 2 version 1.3. Be prepared to encounter bugs in the implementation of the language from time to time. This includes inconsistencies across hardware platforms. Also note that the latest version of Java® is not supported by the Netscape® browser.
2. Online Java® Resources
Here is an online Java® tutorial.
3. Applications and Applets
An application is a stand-alone program that executes independently of a browser. It is usually launched from the command line, using the command line interpreter, java.
An applet is a program that can be embedded in an HTML page. The program can be run by loading the page into a Java®-enabled browser. The JDK includes a tool, called appletviewer, that can also be used to view applets.
A Java® program can be designed to function
- as an application
- as an applet
- both as an application and as an applet
The Hello World Application
Here is an example of a simple Java® application. We make our program an application by writing a class, HelloWorldApp, that contains a function named main(). We can compile our program by typing
javac HelloWorld.java
This will produce a file named HelloWorldApp.class.
We can run the program as application by typing
java HelloWorldApp
The command line interpreter looks for a function named main() in the HelloWorldApp class and then executes it.
Points to Note
The .class file gets its name from the name of the class and not the name of the source file. In this example we deliberately gave the source file a different name, but in practice, we will place each class in a separate file with the same name. This convention becomes important when we want to write a class that is publicly accessible.
Global functions are not allowed in Java®. This is why we placed our main() function inside our class. We must make our main() function static, since it should not be associated with a particular object, and we must also make it public, since it is the entry point to our program.
HelloWorld.java
class HelloWorldApp {
public static void main(String[] args) {
System.out.println("Hello World!");
}
}
The Hello World Applet
Here is an example of a simple Java® applet. We make our program an applet by writing a class, HelloWorld, that inherits the JApplet class provided in the Java® Swing API. The extends keyword declares that class HelloWorld inherits class JApplet. Before we can refer to the JApplet class, we must declare its existence using the import keyword. Here we have imported the JApplet class, which belongs to the javax.swing package, and we have also imported the Graphics class, which belongs to the java.awt package.
The JApplet class possesses a method named paint(), which it inherits from one of its superclasses. Our HelloWorld class inherits this paint() method when it inherits the JApplet class. The purpose of paint() is to draw the contents of the applet. Unfortunately, the default paint() method that we inherit cannot do anything useful since it has no way of knowing what we want to draw. We must therefore override the default paint() in our HelloWorld class. (Note that while C++ requires the use of the virtual keyword to indicate function overriding, Java® does not require us to inform the compiler that overriding will take place.)
The paint() method receives as an argument a Graphics object, which contains information about where and how we can draw. In this example, we choose to draw the text “Hello World!” at coordinates (50,25) by calling the drawString method.
We can compile our program by typing
javac HelloWorld.java
This produces a file named HelloWorld.class. We now embed our applet in an HTML file, Hello.html, and we can run it by typing
appletviewer Hello.html
We can also view Hello.html in a Java®-enabled browser.
HelloWorld.java
import javax.swing.JApplet;
import java.awt.Graphics;
public class HelloWorld extends JApplet {
public void paint(Graphics g) {
g.drawString("Hello world!", 50, 25);
}
}
Hello.html
<HTML>
<HEAD>
<TITLE> A Simple Program </TITLE>
</HEAD>
<BODY>
Here is the output of my program:
<APPLET CODE="HelloWorld.class" WIDTH=150 HEIGHT=25>
</APPLET>
</BODY>
</HTML>
4. Java Basics
Java Data Types
Java® has two main categories of data types: primitive data types and reference data types. Java® does not support the notion of pointers.
Here is a list of primitive data types: boolean, char, byte, short, int, long, float and double.
Reference data types include arrays and classes. Here is an example of a Line class.
Line.java
class Line {
private int miX1, miX2, miY1, miY2;
public Line() {
miX1 = miX2 = miY1 = miY2 = 0;
}
public Line(int iX1, int iX2, int iY1, int iY2) {
miX1 = iX1;
miX2 = iX2;
miY1 = iY1;
miY2 = iY2;
}
}
Creating Objects - Constructors
In Java®, objects of user-defined data types must be dynamically created. In the following example, the first statement declares a Line object, but does not actually create it. The second statement uses the new operator to actually create the object. Note the subtle differences between Java® and C++.
Line line; // Declaration of object (does not create object.)
line = new Line(); // Instantiation of object.
Garbage Collection and Finalization
The Java® runtime system provides a garbage collector, which periodically destroys any unused objects in dynamic memory. The Java® garbage collector uses a mark-sweep algorithm. The dynamic memory is first scanned for referenced objects and then all remaining objects are treated as garbage. Prior to deleting an object, the garbage collector will call the object’s finalizer, which allows the object to perform an orderly cleanup of any associated system resources, such as open files.
Finalization and garbage collection happen asynchronously in the background. It is also possible to force these tasks to occur using the System.runFinalization() and System.gc() commands.
A finalizer has the form
protected void finalize() throws Throwable {
…
// Clean up code for this class here.
…
super.finalize(); // Call the superclass's finalizer (if provided.)
}
Inheritance
As indicated above, the extends keyword allows us to write classes that inherit the properties and methods of another class.
class SubClassName extends SuperClassName {
…
}
If a superclass name is not specified, the superclass is assumed to be java.lang.Object. Also, note that each class can have only one immediate superclass i.e. Java® does not support multiple inheritance.
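A minimal sketch of single inheritance and overriding (the class names here are hypothetical, not from the notes): Dog extends Animal, Animal implicitly extends java.lang.Object, and the overridden method is dispatched dynamically without any virtual keyword.

```java
// Animal implicitly extends java.lang.Object.
class Animal {
    String speak() { return "..."; }
}

// Dog has exactly one immediate superclass. Overriding needs no
// 'virtual' keyword: all Java instance methods dispatch dynamically.
class Dog extends Animal {
    String speak() { return "Woof"; }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Animal a = new Dog();          // superclass reference to subclass object
        System.out.println(a.speak()); // prints "Woof"
    }
}
```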
Packages
A package is a group of related classes or interfaces. Each package defines its own namespace. Thus, two different packages may contain classes with the same name.
We can create a package by placing a package statement at the top of every source file that defines a class belonging to the package. We may later use the classes in the package by placing an import statement at the top of the source file that needs to access the classes in the package.
graphics/Line.java (Path of the file is relative to the CLASSPATH environment variable.)
package graphics; // Class Line belongs to package graphics.
public class Line { // The public class modifier makes this class accessible outside the package.
…
}
MyTest.java
import graphics.*; // Provides access to all public classes in package graphics.
class MyTest {
public static void main(String[] args) {
Line line;
graphics.Line line2; // Can be used for conflict resolution if two packages have a Line class.
line = new Line(0,0,3,4);
line2 = new Line();
}
}
If a package name is not specified for a class, then the class belongs to the default package. The default package has no name and it is always imported.
Here are a few of the core Java® packages:
- java.lang - core Java® language.
- java.io - input/output streams.
- java.util - utility classes, e.g. Stack, Vector, Hashtable, Observer/Observable.
- java.net - networking classes.
- java.security - security classes.
- javax.swing - Swing Graphical User Interface (GUI) components (the new preferred way).
- java.awt - Abstract Window Toolkit GUI components (the old way).
- java.awt.image - image processing.
Member Access Specifiers
There are four types of member access levels: private, protected, public and package. Note that, unlike C++, we must specify access levels on a per-member basis.
class Access {
private void privateMethod() {} // Access level is "private".
protected void protectedMethod() {} // Access level is "protected".
public void publicMethod() {} // Access level is "public".
void packageMethod() {} // Access level is "package" (the default).
}
Instance and Class members
As in C++, we can have instance members or class members. A class member is declared using the static keyword.
class MyPoint {
int x;
int y;
static int x_origin;
static int y_origin;
}
In this example, every object has its own x member, however, all objects share a single x_origin member.
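The sharing can be demonstrated directly; class StaticDemo below is an illustrative addition, not part of the notes.

```java
class MyPoint {
    int x, y;           // instance members: one copy per object
    static int xOrigin; // class members: a single copy shared by all objects
    static int yOrigin;
}

public class StaticDemo {
    public static void main(String[] args) {
        MyPoint p = new MyPoint();
        MyPoint q = new MyPoint();
        p.x = 5;              // changes p only
        MyPoint.xOrigin = 10; // changes the one shared copy
        System.out.println(q.x + " " + MyPoint.xOrigin); // prints "0 10"
    }
}
```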
Constant Members
A final variable is one whose value cannot be changed e.g.
class Avo {
final double AVOGADRO = 6.023e23;
}
Class Modifiers
We have already seen some examples of member modifiers, such as public and private. Java® also allows us to specify class modifiers.
-
A public class is one which can be used by objects outside the current package, e.g.
package graphics; // Class Line belongs to package graphics.
public class Line { // The public class modifier makes this class accessible outside the package.
…
}
-
An abstract class is one which cannot be instantiated and must be subclassed instead. An abstract class may contain abstract methods, i.e. methods with no implementation; however, it may also provide default implementations for other methods, e.g.
abstract class GraphicObject {
int x, y;
…
void moveTo(int newX, int newY) {
…
}
abstract void draw(); // This means that the class must be made abstract.
}
class Circle extends GraphicObject {
void draw() {
…
}
}
-
A final class is one which cannot be subclassed. This may be required for security or design reasons. e.g.
final class String {
…
}
It is also possible to make individual methods final.
public class Point
{
private float mfX, mfY;
public Point() {
mfX = mfY = 0.0f;
}
public Point(float fX, float fY) {
mfX = fX;
mfY = fY;
}
public Point(Point p) {
mfX = p.mfX;
mfY = p.mfY;
}
// You will generally not need to write a finalizer. Member variables that
// are of reference type will be automatically garbage collected once they
// are no longer in use. Finalizers are only for cleaning up system resources,
// e.g. closing files.
protected void finalize() throws Throwable {
System.out.print("In Point finalizer: ");
print();
super.finalize(); // If you have to write a finalizer, be sure to do this.
}
public void print() {
System.out.println("Point print: (" + mfX + "," + mfY + ")");
}
}
public abstract class Shape
{
private Point mCenter;
protected static int miCount = 0; // An example of a static member variable.
public Shape() {
mCenter = new Point();
}
public Shape(Point p) {
mCenter = new Point(p);
}
// You will generally not need to write a finalizer. Member variables that
// are of reference type (i.e. mCenter) will be automatically garbage collected
// once they are no longer in use. Finalizers are only for cleaning up system
// resources, e.g. closing files.
protected void finalize() throws Throwable {
System.out.print("In Shape finalizer: ");
print();
super.finalize(); // If you have to write a finalizer, be sure to do this.
}
public void print() {
System.out.print("Shape print: mCenter = ");
mCenter.print();
}
// An example of a static member function.
public static int getCount() {
return miCount; // Can only access static members in static functions.
}
}
public class Circle extends Shape
{
private float mfRadius;
public Circle() {
super(); // Call the base class constructor.
mfRadius = 0.0f;
miCount++; // Can access this because it is protected in base class.
}
public Circle(float fX, float fY, float fRadius) {
super(new Point(fX, fY)); // Call the base class constructor.
mfRadius = fRadius;
miCount++;
}
public Circle(Point p, float fRadius) {
super(p); // Call the base class constructor.
mfRadius = fRadius;
miCount++;
}
// You will generally not need to write a finalizer. Member variables that
// are of reference type (i.e. mCenter) will be automatically garbage collected
// once they are no longer in use. Finalizers are only for cleaning up system
// resources, e.g. closing files.
protected void finalize() throws Throwable {
System.out.print("In Circle finalizer: ");
print();
super.finalize(); // If you have to write a finalizer, be sure to do this.
}
public void print() {
System.out.print("Circle print: mfRadius = " + mfRadius + " ");
super.print();
}
}
public class Square extends Shape
{
private float mfLength;
public Square() {
super(); // Call the base class constructor.
mfLength = 0.0f;
miCount++; // Can access this because it is protected in base class.
}
public Square(float fX, float fY, float fLength) {
super(new Point(fX, fY)); // Call the base class constructor.
mfLength = fLength;
miCount++;
}
public Square(Point p, float fLength) {
super(p); // Call the base class constructor.
mfLength = fLength;
miCount++;
}
// You will generally not need to write a finalizer. Member variables that
// are of reference type (i.e. mCenter) will be automatically garbage collected
// once they are no longer in use. Finalizers are only for cleaning up system
// resources, e.g. closing files.
protected void finalize() throws Throwable {
System.out.print("In Square finalizer: ");
print();
super.finalize(); // If you have to write a finalizer, be sure to do this.
}
public void print() {
System.out.print("Square print: mfLength = " + mfLength + " ");
super.print();
}
}
public class Main
{
final static int MAX = 3; // An example of a constant class member variable.
public static void main(String[] args)
{
// Create some Point objects.
Point a;
a = new Point();
a.print();
Point b;
b = new Point(2,3);
b.print();
Point c = new Point(b);
c.print();
// Print out the total number of Shapes created so far. At this point,
// no Shapes have been created, however, we can still access static member
// function Shape.getCount().
System.out.println("Total number of Shapes = " + Shape.getCount());
// Create a Circle object and hold on to it using a Shape reference.
Shape s;
s = new Circle(a,1);
s.print(); // This will call the print method in Circle.
// Create an array of Shapes.
Shape[] shapeArray;
shapeArray = new Shape[MAX]; // An array of Shape references.
shapeArray[0] = new Square();
shapeArray[1] = new Circle(4,5,2);
shapeArray[2] = new Square(3,3,1);
// Print out the array of Shapes. The length member gives the array size.
for (int i = 0; i < shapeArray.length; i++) {
shapeArray[i].print();
}
// Print out the total number of Shapes created so far. At this point,
// 4 Shapes have been created.
System.out.println("Total number of Shapes = " + Shape.getCount());
// We can mark the objects for destruction by removing all references to
// them. Normally, we do not need to call the garbage collector explicitly.
// Note: here we have not provided a way to decrement the Shape counter.
a = b = c = null;
s = null;
for (int i = 0; i < shapeArray.length; i++) {
shapeArray[i] = null;
}
shapeArray = null;
}
}
Topics
- Introduction
- Performance Criteria
- Selection Sort
- Insertion Sort
- Shell Sort
- Quicksort
- Choosing a Sorting Algorithm
1. Introduction
Sorting techniques have a wide variety of applications. Computer-Aided Engineering systems often use sorting algorithms to help reason about geometric objects, process numerical data, rearrange lists, etc. In general, therefore, we will be interested in sorting a set of records containing keys, so that the keys are ordered according to some well defined ordering rule, such as numerical or alphabetical order. Often, the keys will form only a small part of the record. In such cases, it will usually be more efficient to sort a list of keys without physically rearranging the records.
2. Performance Criteria
There are several criteria to be used in evaluating a sorting algorithm:
- Running time. Typically, an elementary sorting algorithm requires O(N^2) steps to sort N randomly arranged items. More sophisticated sorting algorithms require O(N log N) steps on average. Algorithms differ in the constant that appears in front of the N^2 or N log N. Furthermore, some sorting algorithms are more sensitive to the nature of the input than others. Quicksort, for example, requires O(N log N) time in the average case, but requires O(N^2) time in the worst case.
- Memory requirements. The amount of extra memory required by a sorting algorithm is also an important consideration. In-place sorting algorithms are the most memory efficient, since they require practically no additional memory. Linked list representations require an additional N words of memory for a list of pointers. Still other algorithms require sufficient memory for another copy of the input array. These are the most inefficient in terms of memory usage.
- Stability. This is the ability of a sorting algorithm to preserve the relative order of equal keys in a file.
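Stability can be seen concretely with records that share a key. The sketch below (an illustrative addition, written in Java for a current JDK) uses java.util.Arrays.sort, which the Java API documents as a stable sort for object arrays, so the two records with equal keys keep their original relative order.

```java
import java.util.Arrays;
import java.util.Comparator;

// Each record carries a sort key plus a tag recording its original position.
class Rec {
    final int key;
    final String tag;
    Rec(int key, String tag) { this.key = key; this.tag = tag; }
}

public class StabilityDemo {
    public static void main(String[] args) {
        Rec[] a = { new Rec(2, "first"), new Rec(1, "mid"), new Rec(2, "second") };
        // Arrays.sort on objects is documented to be stable: the comparator
        // looks only at the key, yet "first" stays ahead of "second".
        Arrays.sort(a, new Comparator<Rec>() {
            public int compare(Rec x, Rec y) { return x.key - y.key; }
        });
        System.out.println(a[0].tag + " " + a[1].tag + " " + a[2].tag);
        // prints "mid first second"
    }
}
```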
Examples of elementary sorting algorithms are: selection sort, insertion sort, shell sort and bubble sort. Examples of sophisticated sorting algorithms are quicksort, radix sort, heapsort and mergesort. We will consider a selection of these algorithms which have widespread use. In the algorithms given below, we assume that the array to be sorted is stored in the memory locations a[1],a[2],…,a[N]. The memory location a[0] is reserved for special keys called sentinels, which are described below.
3. Selection Sort
This “brute force’’ method is one of the simplest sorting algorithms.
Approach
- Find the smallest element in the array and exchange it with the element in the first position.
- Find the second smallest element in the array and exchange it with the element in the second position.
- Continue this process until done.
Here is the code for selection sort:
Selection.cpp
#include "Selection.h" // Typedefs ItemType.
inline void swap(ItemType a[], int i, int j) {
ItemType t = a[i];
a[i] = a[j];
a[j] = t;
}
void selection(ItemType a[], int N) {
int i, j, min;
for (i = 1; i < N; i++) {
min = i;
for (j = i+1; j <= N; j++)
if (a[j] < a[min])
min = j;
swap(a,min,i);
}
}
Selection sort is easy to implement; there is little that can go wrong with it. However, the method requires O(N^2) comparisons and so it should only be used on small files. There is an important exception to this rule. When sorting files with large records and small keys, the cost of exchanging records controls the running time. In such cases, selection sort requires O(N) time since the number of exchanges is at most N.
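For comparison with this course's other language, here is the same algorithm sketched in Java; it is an illustrative port, not the course's official code. Java arrays are 0-based, so no slot is reserved for a sentinel.

```java
public class SelectionSort {
    // Selection sort on a 0-based Java array of ints.
    static void selection(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            // Find the smallest element in a[i..N-1].
            int min = i;
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[min]) min = j;
            // Swap it into position i: at most N-1 exchanges in total.
            int t = a[min]; a[min] = a[i]; a[i] = t;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 3};
        selection(a);
        System.out.println(java.util.Arrays.toString(a)); // prints "[1, 2, 3, 5, 9]"
    }
}
```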
4. Insertion Sort
This is another simple sorting algorithm, which is based on the principle used by card players to sort their cards.
Approach
- Choose the second element in the array and place it in order with respect to the first element.
- Choose the third element in the array and place it in order with respect to the first two elements.
- Continue this process until done.
Insertion of an element among those previously considered consists of moving larger elements one position to the right and then inserting the element into the vacated position.
Here is the code for insertion sort:
Insertion.cpp
#include "Insertion.h" // Typedefs ItemType.
void insertion(ItemType a[], int N) {
int i, j;
ItemType v;
for (i = 2; i <= N; i++) {
v = a[i];
j = i;
while (a[j-1] > v) {
a[j] = a[j-1];
j--;
}
a[j] = v;
}
}
It is important to note that there is no test in the while loop to prevent the index j from running out of bounds. This could happen if v is smaller than a[1],a[2],…,a[i-1]. To remedy this situation, we place a sentinel key in a[0], making it at least as small as the smallest element in the array. The use of a sentinel is more efficient than performing a test of the form while (j > 1 && a[j-1] > v). Insertion sort is an O(N^2) method both in the average case and in the worst case. For this reason, it is most effectively used on files with roughly N < 20. However, in the special case of an almost sorted file, insertion sort requires only linear time.
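Java checks array bounds at runtime and the sentinel idiom is less natural with 0-based arrays, so a Java sketch (an illustrative addition, not the course code) would use the explicit guard-test form instead:

```java
public class InsertionSort {
    // Insertion sort with an explicit bounds test instead of a sentinel.
    static void insertion(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int v = a[i];
            int j = i;
            // Shift larger elements one slot right; the j > 0 test
            // plays the role of the sentinel key in a[0].
            while (j > 0 && a[j - 1] > v) {
                a[j] = a[j - 1];
                j--;
            }
            a[j] = v;
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 1, 3, 2};
        insertion(a);
        System.out.println(java.util.Arrays.toString(a)); // prints "[1, 2, 3, 4]"
    }
}
```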
5. Shell Sort
This is a simple, but powerful, extension of insertion sort, which gains speed by allowing exchanges of non-adjacent elements.
Definition
An h-sorted file is one with the property that taking every h-th element (starting anywhere) yields a sorted file.
Approach
- Choose an initial large step size, hK, and use insertion sort to produce an hK-sorted file.
- Choose a smaller step size, hK-1, and use insertion sort to produce an hK-1-sorted file, using the hK-sorted file as input.
- Continue this process until done. The last stage uses insertion sort, with a step size h1 = 1, to produce a sorted file.
Each stage in the sorting process brings the elements closer to their final positions. The method derives its efficiency from the fact that insertion sort is able to exploit the order present in a partially sorted input file; input files with more order to them require a smaller number of exchanges. It is important to choose a good sequence of increments. A commonly used sequence is (3^K - 1)/2, …, 121, 40, 13, 4, 1, which is obtained from the recurrence h_k = 3*h_(k+1) + 1. Note that the sequence obtained by taking powers of 2 leads to bad performance because elements in odd positions are not compared with elements in even positions until the end.
Here is the complete code for shell sort:
Shell.cpp
#include "Shell.h" // Typedefs ItemType.
void shell(ItemType a[], int N) {
int i, j, h;
ItemType v;
for (h = 1; h <= N/9; h = 3*h+1);
for (; h > 0; h /= 3)
for (i = h+1; i <= N; i++) {
v = a[i];
j = i;
while (j > h && a[j-h] > v) {
a[j] = a[j-h];
j -= h;
}
a[j] = v;
}
}
Shell sort requires O(N^(3/2)) operations in the worst case, which means that it can be quite effectively used even for moderately large files (say N < 5000).
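The same routine can be sketched in Java (0-based arrays, an illustrative port rather than the course code), with the increments computed from h = 3h+1:

```java
public class ShellSort {
    static void shell(int[] a) {
        int n = a.length;
        // Build the largest increment from h = 3h+1: 1, 4, 13, 40, ...
        int h = 1;
        while (h <= n / 9) h = 3 * h + 1;
        // Each pass h-sorts the file; the final pass (h = 1) is plain
        // insertion sort applied to an almost sorted array.
        for (; h > 0; h /= 3) {
            for (int i = h; i < n; i++) {
                int v = a[i];
                int j = i;
                while (j >= h && a[j - h] > v) {
                    a[j] = a[j - h];
                    j -= h;
                }
                a[j] = v;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {9, 8, 7, 6, 5, 4, 3, 2, 1, 0};
        shell(a);
        System.out.println(java.util.Arrays.toString(a));
        // prints "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]"
    }
}
```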
6. Quicksort
This divide and conquer algorithm is, in the average case, the fastest known sorting algorithm for large values of N. Quicksort is a good general purpose method in that it can be used in a variety of situations. However, some care is required in its implementation. Since the algorithm is based on recursion, we assume that the array (or subarray) to be sorted is stored in the memory locations a[left],a[left+1],…,a[right]. In order to sort the full array, we simply initialize the algorithm with left = 1 and right = N.
Approach
- Partition the subarray a[left],a[left+1],…,a[right] into two parts, such that
- element a[i] is in its final place in the array for some i in the interval [left,right].
- none of the elements in a[left],a[left+1],…,a[i-1] are greater than a[i].
- none of the elements in a[i+1],a[i+2],…,a[right] are less than a[i].
- Recursively partition the two subarrays, a[left],a[left+1],…,a[i-1] and a[i+1],a[i+2],…,a[right], until the entire array is sorted.
How to partition the subarray a[left],a[left+1],…,a[right]:
- Choose a[right] to be the element that will go into its final position.
- Scan from the left end of the subarray until an element greater than a[right] is found.
- Scan from the right end of the subarray until an element less than a[right] is found.
- Exchange the two elements which stopped the scans.
- Continue the scans in this way. Thus, all the elements to the left of the left scan pointer will be less than a[right] and all the elements to the right of the right scan pointer will be greater than a[right].
- When the scan pointers cross we will have two new subarrays, one with elements less than a[right] and the other with elements greater than a[right]. We may now put a[right] in its final place by exchanging it with the leftmost element in the right subarray.
Here is the complete code for quicksort:
Quicksort.cpp
// inline void swap() is the same as for selection sort.
void quicksort(ItemType a[], int left, int right) {
int i, j;
ItemType v;
if (right > left) {
v = a[right];
i = left - 1;
j = right;
for (;;) {
while (a[++i] < v);
while (a[--j] > v);
if (i >= j) break;
swap(a,i,j);
}
swap(a,i,right);
quicksort(a,left,i-1);
quicksort(a,i+1,right);
}
}
Note that this code requires a sentinel key in a[0] to stop the right-to-left scan in case the partitioning element is the smallest element in the file. Quicksort requires O(N log N) operations in the average case. However, its worst case performance is O(N^2), which occurs in the case of an already sorted file! There are a number of improvements which can be made to the basic quicksort algorithm.
- Using the median of three partitioning method makes the worst case far less probable, and it eliminates the need for sentinels. The basic idea is as follows. Choose three elements, a[left], a[middle] and a[right], from the left, middle and right of the array. Sort them (by direct comparison) so that the median of the three is in a[middle] and the largest is in a[right]. Now exchange a[middle] with a[right-1]. Finally, we run the partitioning algorithm on the subarray a[left+1],a[left+2],…,a[right-2] with a[right-1] as the partitioning element.
- Another improvement is to remove recursion from the algorithm by using an explicit stack. The basic idea is as follows. After partitioning, push the larger subfile onto the stack. The smaller subfile is processed immediately by simply resetting the parameters left and right (this is known as end-recursion removal). With the explicit stack implementation, the maximum stack size is about log2 N. On the other hand, with the recursive implementation, the underlying stack could be as large as N.
- A third improvement is to use a cutoff to insertion sort whenever small subarrays are encountered. This is because insertion sort, albeit an O(N^2) algorithm, has a sufficiently small constant in front of the N^2 to be more efficient than quicksort for small N. A suitable value for the cutoff subarray size would be approximately in the range 5 ~ 25.
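The cutoff improvement can be sketched as follows. This is an illustrative Java version, not the course's C++ code: the cutoff of 10 is an arbitrary choice inside the suggested range, and the right-to-left scan uses an explicit bounds test in place of a sentinel.

```java
public class QuickCutoff {
    static final int CUTOFF = 10; // anywhere in roughly 5..25 works

    static void sort(int[] a, int left, int right) {
        // Small subarrays: hand off to insertion sort.
        if (right - left + 1 <= CUTOFF) {
            insertion(a, left, right);
            return;
        }
        int v = a[right];          // rightmost element is the partition value
        int i = left - 1, j = right;
        for (;;) {
            while (a[++i] < v) ;
            while (j > left && a[--j] > v) ; // explicit test, no sentinel
            if (i >= j) break;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
        int t = a[i]; a[i] = a[right]; a[right] = t; // v into its final place
        sort(a, left, i - 1);
        sort(a, i + 1, right);
    }

    static void insertion(int[] a, int left, int right) {
        for (int i = left + 1; i <= right; i++) {
            int v = a[i], j = i;
            while (j > left && a[j - 1] > v) { a[j] = a[j - 1]; j--; }
            a[j] = v;
        }
    }

    public static void main(String[] args) {
        int[] a = new int[100];
        for (int i = 0; i < a.length; i++) a[i] = (a.length - i) * 7 % 51;
        sort(a, 0, a.length - 1);
        boolean ok = true;
        for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) ok = false;
        System.out.println(ok ? "sorted" : "not sorted"); // prints "sorted"
    }
}
```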
7. Choosing a Sorting Algorithm
Table 1 summarizes the performance characteristics of some common sorting algorithms. Shell sort is usually a good starting choice for moderately large files N < 5000, since it is easily implemented. Bubble sort, which is included in Table 1 for comparison purposes only, is generally best avoided. Insertion sort requires linear time for almost sorted files, while selection sort requires linear time for files with large records and small keys. Insertion sort and selection sort should otherwise be limited to small files. Quicksort is the method to use for very large sorting problems. However, its performance may be significantly affected by subtle implementation errors. Furthermore, quicksort performs badly if the file is already sorted. Another possible disadvantage is that quicksort is not stable i.e. it does not preserve the relative order of equal keys. All of the above sorting algorithms are in-place methods. Quicksort requires a small amount of additional memory for the auxiliary stack. There are a few other sorting methods which we have not considered. Heapsort requires O(N log N) steps both in the average case and the worst case, but it is about twice as slow as quicksort on average. Mergesort is another O(N log N) algorithm in the average and worst cases. Mergesort is the method of choice for sorting linked lists, where sequential access is required.
Table 1: Approximate running times for various sorting algorithms
Topics
1. CVS Resources
Where to get CVS
The latest version of CVS can be obtained by anonymous ftp:
ftp://prep.ai.mit.edu/pub/gnu/
CVS references on the Web
2. Introduction
CVS (Concurrent Versions System) is a version control system which allows multiple software developers to work on a project concurrently. It maintains a single master copy of the source-code, which is called the source repository. Individual developers may obtain a working copy by checking out a snapshot of the source repository. This working copy can be edited without affecting other developers, and once the changes are complete, CVS assists in merging the changes into the source repository. CVS supports parallel development efforts through branches, and it provides mechanisms for merging these branches back together when desired. It also provides the facility to tag the state of the directory tree at any given point so that that state can be recreated at a later time.
CVS runs under both Unix® and Windows® 95/NT environments, and it can be run as a client-server application with the source repository residing on a central Unix® server and the clients running on either Unix® or Windows® machines. The source repository may be accessed using a command line interface or through a web interface. Security provisions include simple password protection as well as Kerberos encryption.
CVS is really a front end to the slightly more primitive RCS revision control system.
3. Environment Variable
The CVSROOT environment variable needs to be set up to point to the source repository e.g.
setenv CVSROOT /mit/1.124/mysrc
Note
For a CVS password server running on a remote machine with the source repository located in /mysrc, the environment variable is set as follows:
setenv CVSROOT :pserver:USER@HOSTNAME:/mysrc
in which case, the user must log in using
cvs login
For a Kerberos-authenticated server, we would need to use kserver instead of pserver.
4. Setting Up a New Repository
Once the CVSROOT environment has been set to point to the desired location, we can create a new repository using
cvs init
This operation only needs to be done once.
5. Importing a Project Into CVS
A project which was previously not under CVS control can be imported into the CVS repository using the cvs import command. Be sure to change directory to the top level project directory first.
e.g.
cd ~/MyProject
cvs import -m"Importing project into CVS." Projects/MyProject vendortag releasetag
will create the directory $CVSROOT/Projects/MyProject in the CVS repository, and import the local project files into this directory. vendortag could be your name, and releasetag could be the string "start".
This operation only needs to be done once.
6. Routine CVS Operations on Source Files
Checking out the Project
cvs co Projects/MyProject
will make a local working copy of the repository files, with the same directory structure. There are several variants on this command. For example, instead of checking out a directory, one can check out a CVS module, which defines a collection of files and directories. CVS modules are defined in the $CVSROOT/modules file.
One can also check out a specific revision or tag using the -r option e.g.
cvs co -r1.3 Project/MyProject/myfile.C
cvs co -rMyTag Project/MyProject
It is also possible to check out the project as of a particular date and time:
cvs co -D"11/23/97 16:00:00 EST" Project/MyProject
This command does not affect the source repository.
Updating a File
Before a set of changes can be committed to the source repository, all files must be brought up to date. Files can become out of date if someone else committed changes after the working copy was checked out. To update all files in the project, type
cd Project/MyProject
cvs up
Individual files may also be updated e.g.
cvs up myfile.C
The update command will not throw away any local changes that have been made. Instead it will attempt to merge them with the changes that were retrieved from the repository. In some cases, the merge will fail and CVS will report a conflict. If this happens, the conflicts will have to be resolved by editing the portions of the source file that are in conflict. Conflicts can be located by searching for the <<<<<<< conflict marker.
The -r option can also be used with the update command.
cvs up -r1.3 myfile.C
Note that the -r option to cvs co or cvs up will cause a sticky tag to be set (you can check using cvs stat.) To remove the sticky tag, and update to the latest revision, use
cvs up -A myfile.C
This command does not affect the source repository.
Looking at Differences
You can examine the local changes you are about to commit using the cvs diff command. e.g.
cvs diff myfile.C
This command does not affect the source repository.
Committing Changes to the Source Repository
Once you are sure that your changes are ready to be committed, use the cvs commit command. e.g.
cvs commit -m"This is my log message." myfile.C
This command affects the source repository!
Examining the Version History of a File
Use cvs log to examine the log entries for a particular file. e.g.
cvs log myfile.C
This command does not affect the source repository.
Examining the Status of a File
The current status of a file can be examined using the cvs stat command. This is useful for checking whether or not the file is up to date.
cvs stat myfile.C
This command does not affect the source repository.
Adding a New File
A new file can be added to the project using the cvs add command. Unlike cvs import, the cvs add command should be used when the project is already under CVS control, and has already been checked out. e.g.
cd Project/MyProject
cvs add newfile.C
cvs commit -m"Added a new file." newfile.C
This command affects the source repository!
Removing an Unwanted File
If a file is no longer necessary, it can be removed from the project using the cvs rm command. e.g.
cd Project/MyProject
cvs rm oldfile.C
cvs commit -m"Removed an unwanted file." oldfile.C
This causes the file to be retired. It will still be kept in the source repository in a subdirectory called Attic, in case it ever needs to be resurrected.
This command affects the source repository!
7. Tagging, Branching and Merging
Tagging
Once the source tree has reached a stable state, it is a good idea to tag the tree, so that the stable state can be recreated. e.g.
cd Project/MyProject
cvs tag Release_1_0
The tree can then continue to be modified, but the stable state can always be recovered using
cvs co -rRelease_1_0 Project/MyProject
Branching
A parallel branch can be created by using the cvs tag command with the -b option. e.g.
cd Project/MyProject
cvs tag BigExperiment_BASE
cvs tag -b BigExperiment_BRANCH
Developers who wish to work on the branch will then check out the branch:
cvs co -rBigExperiment_BRANCH Project/MyProject
All changes will then be committed to the branch. In the meantime, development can also proceed on the trunk by doing a normal check out without the -r option.
Merging
The changes on the branch can be merged back with the trunk as follows:
cvs co Project/MyProject
cvs tag BeforeBigMerge
cvs up -jBigExperiment_BRANCH
<Resolve any conflicts here.>
cvs commit -m"Merged in the big experiment."
The merging process should be handled with care, since it is easy to make mistakes. It is possible to do more complicated merges, such as merging just a portion of the branch with the trunk. For more details, visit one of the CVS resources listed above.
Static Member Data and Static Member Functions
(Ref. Lippman 13.5)
The Point class that we have developed has both member data (properties) and member functions (methods). Each object that we create will have its own variables mfX and mfY, whose values can vary from one Point to another. In order to access a member function, we must have created an object first, e.g. if we want to write
a.print();
then the object a must already exist.
Suppose now that we wish to have a counter that will keep track of the number of Point objects that we create. It does not make sense for each Point object to have its own copy of the counter, since the counter will have the same value regardless of which object we are referring to. We would rather have a single integer variable that is shared by all objects of the class. We can do this by creating a static member variable as the counter. What if we wish to provide a member function to query the counter? We would not be able to access the member function unless we have created at least one object. We would rather have a function that is associated with the class itself and not with any object. We can do this by creating a static member function. The following example illustrates this.
point.h
// Declaration of class Point.
#ifndef _POINT_H_
#define _POINT_H_
#include <iostream.h>
class Point {
// The state of a Point object. Property variables are typically
// set up as private data members, which are read from and
// written to via public access methods.
private:
float mfX;
float mfY;
static int miCount;
// The behavior of a Point object.
public:
Point(float fX=0, float fY=0); // A constructor that takes two floats.
~Point(); // The destructor.
static int get_count();
// …
};
#endif // _POINT_H_
point.C
// Definition of class Point.
#include "point.h"
// Initialize the counter.
int Point::miCount = 0;
// A constructor which creates a Point object from two floats.
Point::Point(float fX, float fY) {
cout << "In constructor Point::Point(float,float)" << endl;
mfX = fX;
mfY = fY;
miCount++;
}
// The destructor.
Point::~Point() {
cout << "In destructor Point::~Point()" << endl;
miCount--;
}
// Accessor for the counter variable.
int Point::get_count() {
return miCount;
}
point_test.C
#include "point.h"
int main() {
cout << Point::get_count() << endl; // We don't have any Point objects yet!
Point a;
Point *b = new Point(1.0, 2.0);
cout << b->get_count() << endl; // This is allowed, since *b exists.
delete b;
cout << a.get_count() << endl; // This is allowed, since a exists.
return 0;
}
|
common_crawl_ocw.mit.edu_71
|
Topics
1. Introduction
Templates allow us to write functions and classes that are based on parameterized types. For example, we may wish to write a function or class to run quicksort on (1) an array of ints and (2) an array of floats. Rather than writing two separate versions, one for ints and one for floats, we may write a single generic template from which the compiler can generate the int and float versions of quicksort.
2. Function Templates
The following example illustrates how to use function templates.
FunctionTemplates.cpp
#include <iostream.h>
// A function template for creating functions that reverse the order of the elements in an array.
template<typename ItemType>
void reverse(ItemType a[], int N) {
for (int i = 0; i < N/2; i++) {
ItemType tmp = a[i];
a[i] = a[N-1-i];
a[N-1-i] = tmp;
}
}
// A function template, where the type cannot be inferred from the function arguments.
template<typename ItemType>
void print(void *p, int N) {
ItemType *a = (ItemType *)p;
for (int i = 0; i < N; i++)
cout << "Element " << i << " is " << a[i] << endl;
}
// Optional: you are allowed to explicitly instantiate the function templates, if you wish. If you don't
// do this, the instantiation will occur implicitly as a result of the function calls below.
template void print<int>(void *, int);
template void print<float>(void *, int);
const int aLength = 5;
const int bLength = 10;
int main() {
int i;
int a[aLength];
float b[bLength];
for (i = 0; i < aLength; i++)
a[i] = i;
for (i = 0; i < bLength; i++)
b[i] = (float)i;
// The compiler will create two versions of reverse(), one to handle ints and one to handle floats.
// In this case, ItemType can be inferred from the first argument.
reverse(a, aLength);
reverse(b, bLength);
// The compiler will create two versions of print(), one to handle ints and one to handle floats.
// In this case, ItemType cannot be inferred from the function arguments. Hence, explicit
// specification of the parameter is required. (VC++ users note: VC++ 6.0 has a bug which
// causes it to use the float version in both cases.)
print<int>((void *)a, aLength);
print<float>((void *)b, bLength);
return 0;
}
3. Class Templates
The following example illustrates how to use class templates.
ArrayClass.h
#include <iostream.h>
// This class template allows us to create array objects of any type and size.
template<typename ItemType, int size>
class ArrayClass {
private:
ItemType array[size];
public:
ArrayClass();
~ArrayClass() {}
void print();
};
// In a class template, all member function definitions should be placed in the header file.
template<typename ItemType, int size>
ArrayClass<ItemType, size>::ArrayClass() {
for (int i = 0; i < size; i++) {
array[i] = (ItemType)(i/2.0); // The chosen default behavior.
}
}
template<typename ItemType, int size>
void ArrayClass<ItemType, size>::print() {
for (int i = 0; i < size; i++) {
cout << array[i] << endl;
}
}
Main.cpp
#include "ArrayClass.h"
int main() {
ArrayClass<int, 5> a;
ArrayClass<float, 10> b;
a.print();
cout << endl;
b.print();
return 0;
}
Topics
1. Loading and Displaying Images
(Ref. Java® Tutorial)
Images provide a way to augment the aesthetic appeal of a Java® program. Java® provides support for two common image formats: GIF and JPEG. An image that is in one of these formats can be loaded by using either a URL or a filename.
The basic class for representing an image is java.awt.Image. Packages that are relevant to image handling are java.applet, java.awt and java.awt.image.
Loading an Image
Images can be loaded using the getImage() method. There are several versions of getImage(). When we create an applet by subclassing javax.swing.JApplet, we inherit the following methods from java.awt.Applet.
- Image getImage(URL url)
- Image getImage(URL url, String name)
These methods only work after the applet’s constructor has been called. A good place to call them is in the applet’s init() method. Here are some examples:
// In a method in an Applet subclass, such as the init() method:
Image image1 = getImage(getCodeBase(), "imageFile.gif");
Image image2 = getImage(getDocumentBase(), "anImageFile.jpeg");
Image image3 = getImage(new URL("http://java.sun.com/graphics/people.gif"));
In the first example, the code base is the URL of the directory that contains the applet's .class file. In the second example, the document base is the URL of the directory containing the HTML document that loads the applet.
Alternatively, we may use the getImage() methods provided by the Toolkit class.
- Image getImage(URL url)
- Image getImage(String filename)
This approach can be used either in an applet or an application. e.g.
Toolkit toolkit = Toolkit.getDefaultToolkit();
Image image1 = toolkit.getImage("imageFile.gif");
Image image2 = toolkit.getImage(new URL("http://java.sun.com/graphics/people.gif"));
In general, applets cannot read files that are on the local machine for reasons of security. Thus, applets typically download any images they need from the server.
Note that getImage() returns immediately without waiting for the image to load. The image loading process occurs lazily, in that the image doesn’t start to load until the first time we try to display it.
Displaying an Image
Images can be displayed by calling one of the drawImage() methods supplied by the Graphics object that gets passed in to the paintComponent() method.
This version draws an image at the specified position using its natural size:
boolean drawImage(Image img, int x, int y, ImageObserver observer)
This version draws an image at the specified position, and scales it to the specified width and height:
boolean drawImage(Image img, int x, int y, int width, int height, ImageObserver observer)
The ImageObserver is a mechanism for tracking the loading of an image (see below). One of the uses for an ImageObserver is to ensure that the image is properly displayed once it has finished loading. The return value from drawImage() is rarely used: this value is true if the image has been completely loaded and thus completely painted, and false otherwise.
Here is a simple example of loading and displaying images.
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
// This applet displays a single image twice,
// once at its normal size and once much wider.
public class ImageDisplayer extends JApplet {
static String imageFile = "images/rocketship.gif";
public void init() {
Image image = getImage(getCodeBase(), imageFile);
ImagePanel imagePanel = new ImagePanel(image);
getContentPane().add(imagePanel, BorderLayout.CENTER);
}
}
class ImagePanel extends JPanel {
Image image;
public ImagePanel(Image image) {
this.image = image;
}
public void paintComponent(Graphics g) {
super.paintComponent(g); // Paint background
// Draw image at its natural size first.
g.drawImage(image, 0, 0, this); //85x62 image
// Now draw the image scaled.
g.drawImage(image, 90, 0, 300, 62, this);
}
}
2. Tracking Image Loading
The most frequent reason to track image loading is to find out when an image or group of images is fully loaded. At a minimum, we will want to make sure that each image is redrawn after it finishes loading, otherwise only a part of the image will be visible. We may even wish to wait until image loading is complete before attempting to do any drawing at all. There are two ways to track images: using the MediaTracker class and by implementing the ImageObserver interface.
Media Trackers
(Ref. Java® Tutorial)
The MediaTracker class provides a relatively simple way to delay drawing until the image loading process is complete. We can modify the ImageDisplayer applet to perform the following steps:
- Create a MediaTracker object.
- Add the image to it using the addImage() method. (If we had several images, they would all be added to the same MediaTracker.)
- Use the waitForAll() method to load the image data synchronously when the program starts up.
- Use the checkAll() method in the paintComponent() method to test whether image loading is complete.
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
// This applet displays a single image twice,
// once at its normal size and once much wider.
public class ImageDisplayer extends JApplet {
static String imageFile = "images/rocketship.gif";
public void init() {
Image image = getImage(getCodeBase(), imageFile);
// Create a media tracker and add the image to it. If we had several
// images to load, they could all be added to the same media tracker.
MediaTracker tracker = new MediaTracker(this);
tracker.addImage(image, 0);
// Start downloading the image and wait until it finishes loading.
try {
tracker.waitForAll();
}
catch(InterruptedException e) {}
ImagePanel imagePanel = new ImagePanel(image, tracker);
getContentPane().add(imagePanel, BorderLayout.CENTER);
}
}
class ImagePanel extends JPanel {
Image image;
MediaTracker tracker;
public ImagePanel(Image image, MediaTracker tracker) {
this.image = image;
this.tracker = tracker;
}
public void paintComponent(Graphics g) {
super.paintComponent(g); // Paint background
// Check that the image has loaded before trying to draw it.
if (!tracker.checkAll()) {
g.drawString("Please wait...", 0, 0);
return;
}
// Draw image at its natural size first.
g.drawImage(image, 0, 0, this); //85x62 image
// Now draw the image scaled.
g.drawImage(image, 90, 0, 300, 62, this);
}
}
Image Observers
Image observers provide a way to track image loading even more closely. In order to track image loading, we must pass in an object that implements the ImageObserver interface as the last argument to the Graphics object’s drawImage() method. The ImageObserver interface has a method named
imageUpdate(Image img, int flags, int x, int y, int width, int height)
which will be called whenever an interesting milestone in the image loading process is reached. The flags argument can be examined to determine exactly what this milestone is. The ImageObserver interface defines the following constants, against which the flags argument can be tested using the bitwise AND operator:
public static final int WIDTH;
public static final int HEIGHT;
public static final int PROPERTIES;
public static final int SOMEBITS;
public static final int FRAMEBITS;
public static final int ALLBITS;
public static final int ERROR;
public static final int ABORT;
The java.awt.Component class implements the ImageObserver interface and provides a default version of imageUpdate(), which calls repaint() when the image has finished loading. The following example shows how we could modify the ImageDisplayer applet, so that the ImagePanel class provides its own version of imageUpdate() instead of using the one that it inherits from java.awt.Component. Note that we pass this as the last argument to drawImage().
import java.awt.*;
import java.awt.event.*;
import java.awt.image.ImageObserver;
import javax.swing.*;
// This applet displays a single image twice,
// once at its normal size and once much wider.
public class ImageDisplayer extends JApplet {
static String imageFile = "images/rocketship.gif";
public void init() {
Image image = getImage(getCodeBase(), imageFile);
ImagePanel imagePanel = new ImagePanel(image);
getContentPane().add(imagePanel, BorderLayout.CENTER);
}
}
class ImagePanel extends JPanel implements ImageObserver {
Image image;
public ImagePanel(Image image) {
this.image = image;
}
public void paintComponent(Graphics g) {
super.paintComponent(g); // Paint background
// Draw image at its natural size first.
g.drawImage(image, 0, 0, this); //85x62 image
// Now draw the image scaled.
g.drawImage(image, 90, 0, 300, 62, this);
}
public boolean imageUpdate(Image image, int flags, int x, int y,
int width, int height) {
// If the image has finished loading, repaint the window.
if ((flags & ALLBITS) != 0) {
repaint();
return false; // Return false to say we don’t need further notification.
}
return true; // Image has not finished loading, need further notification.
}
}
3. Image Animations
(Ref. Java® Tutorial: Moving an Image Across the Screen and Displaying a Sequence of Images)
Moving an Image Across the Screen
The simplest type of image animation involves moving a single frame image across the screen. This is known as cutout animation, and it is accomplished by repeatedly updating the position of the image in an animation thread, in a similar fashion to the bouncing ball animation we saw earlier.
Displaying a Sequence of Images
Another type of image animation is cartoon-style animation, in which a sequence of image frames is displayed in succession. The following example does this by creating an array of ten Image objects and then incrementing the array index every time the paintComponent() method is called. The portion of code that is of main interest is:
// In initialization code.
Image[] images = new Image[10];
for (int i = 1; i <= 10; i++) {
images[i-1] = getImage(getCodeBase(), "images/duke/T" + i + ".gif");
}
// In the paintComponent method.
g.drawImage(images[ImageSequenceTimer.frameNumber % 10], 0, 0, this);
This is a good example of why a MediaTracker should be used to delay drawing until after all the images have loaded.
4. Examples
Here are some more examples of image animation:
Check out Code Samples and Applets for other interesting applets.
These notes were prepared by Petros Komodromos.
Topics
- Course goals & content, references, recitations
- Compilation
- Debugging
- Makefiles
- Concurrent Versions System (CVS)
- Introduction to C++
- Data Types
- Variable Declarations and Definitions
- Operators
- Expressions and Statements
- Input/Output Operators
- Preprocessor directives
- Header files
- Control Structures
1. Course Goals and Content
- C++:
Procedural and Object Oriented Programming
Data structures
Software design and implementation
- **Algorithms:**
sorting (insertion, selection, mergesort, quicksort, shellsort, hashing)
searching (linear search, binary search, binary search trees)
- Java®:
Object Oriented Programming (OOP) using Java®, Java® application and applets,
Graphical User Interfaces, Graphics using Java®
- **Advanced Topics:**
Geometric algorithms, Java®3D, Database design using JDBC, etc.
- Term Project
Textbooks - References
- C++
- C++ Primer. Lippman and Lajoie. 3rd edition (required).
- C++ How to program. Deitel & Deitel. 3rd edition.
- The C++ programming language. Bjarne Stroustrup. 3rd edition.
- On to C++. P. H. Winston. 2nd edition.
- Object Oriented Programming in C++. Johnsonbaugh & Kalin.
- C++ FAQ. Cline/Lomow.
- Algorithms
- Algorithms in C++. Sedgewick (required).
- Algorithms, Data Structures and Problem Solving with C++. Weiss.
- Introduction to Algorithms. Cormen, Leiserson, and Rivest.
- Java®
- The Java® Tutorial. Mary Campione and Kathy Walrath (required).
- Core Java®. Gary Cornell and Cay Horstmann. 2nd edition.
- The Java® programming language. Ken Arnold and James Gosling. 2nd edition.
- Java®: How to program. Deitel & Deitel. 2nd edition.
Problem Sets
You need to follow the instructions that are provided with each problem set concerning what you must submit. In all problem sets you must both electronically turn in the source code files and submit hardcopies of all completed or modified source code files. Sometimes you may also need to provide screen dumps of the window with the output results from the execution of your programs.
The problem statement and the provided source code files can be obtained using CVS, which is a version control system (covered later in this recitation). Whenever source code files are provided, the following naming convention is used:
For each question there is a ps<number_of_problem_set>_<number_of_question>.C which you may need to use, e.g. ps3_2.C for question 2 of problem set 3.
In some cases a makefile (named make<number_of_problem_set>) is provided, which you may use to compile and link your code.
Please comment your code to make it more readable whenever you think it would help someone else (i.e. the graders) understand what your code does and how. Comments may be incorporated in your code either by enclosing them between /* and */, or by placing them after two forward slashes //.
It is also very useful, both for you and for the graders, to indent your code in order to emphasize loops and other blocks. Indent so that different blocks start in different columns, making the code easier to read and understand.
Please, do not make any other changes to the provided code, except those that you are asked to make.
You can print a file in a compact form (saving some paper) using the following command so as to have the name of the file and the time and date printed on the hardcopy.
athena% enscript -2Gr -P<printer name> <filename>
Whenever necessary, you can dump an X window directly to a printer using the following command and clicking on the window you want to print.
athena% xdpr -P<printer name>
Homework that is turned in late will be penalized as follows:
- If turned in one day late, i.e. by 2:30 p.m. on the day after the due date, the penalty is 10% of the overall score (i.e. 10 points off). You can turn in the problem set solution only once: the first hardcopy you turn in is the one that will be graded. If you plan to submit late, do not also submit a solution on the due date; submit only the late one on the day after the due date.
- If turned in more than one day late, the penalty is 100% i.e. no credit will be awarded. No exceptions!
Please, always staple your hardcopies together and write clearly on the first page your name and username. Also, type within a comment at the top of each file you submit the following information:
- your first and last name
- the problem set number, and
- the question number
2. Compilation
After writing the source code of a program, e.g. using an editor such as emacs, you must compile it, which translates the source code into machine instructions. For the C++ programs you need to use the GNU g++ compiler. To be able to use this compiler, you need to add its locker using the following command (on the Athena prompt):
% add -f gnu
You can customize your account, using a dotfile, so that it will automatically add the gnu locker at start-up. In the file .environment you need to add the following line:
add -f gnu
The add command attaches the specified locker to your workstation and adds it to your path. Dotfiles, such as .environment and .cshrc.mine, can be used to set environment variables, shell aliases and attach lockers in order to get the desired working environment. In the .environment dotfile you may also put the following lines to avoid typing them every time you log in:
add infoagents
add 1.124
setenv CVSROOT /afs/athena.mit.edu/course/1/1.124/src
You can check if you properly use the GNU compiler by giving the following commands:
% which g++
which should give you something like:
/mit/gnu/arch/sun4x_55/bin/g++ or /mit/gnu/arch/sgi_53/bin/g++
Then, you can use the GNU compiler to compile a C++ source code file. For example, to compile and link the source code file ps0_1.C, which is provided in PS0, you can use the following command:
% g++ ps0_1.C
Then, you can run the generated executable file, which is by default named a.out, by typing its name at the Athena prompt.
To give a specific name to the generated executable the -o option must be used:
% g++ ps0_1.C -o ps0_1
The generated executable file is then named ps0_1.
Sometimes you may need to link against an external library, e.g. the math library, using the -l option followed by the library's name (below, the math library is linked by giving m after -l), in addition to including its header file in your source files:
% g++ ps0_1.C -o ps0_1 -lm
In cases where you only need to compile a file without linking, i.e. to generate only the corresponding object file (the machine language version of the source code) and not the executable, use the -c option. (In the following example, a ps0_1.o file will be generated.)
% g++ -c ps0_1.C
You also need to use the flags -ansi -pedantic to enforce the rules of ANSI Standard C++. In addition, it is useful to use the -Wall option to get all warnings, e.g.:
% g++ -ansi -pedantic -Wall ps0_1.C -o ps0_1 -lm
It is more convenient to set an alias (e.g. c++), instead of typing all this every time. In particular you can add in your .cshrc.mine dotfile the following:
alias "c++" "g++ -ansi -pedantic -Wall -lm"
and then simply use the following command to compile and link a program:
% c++ ps0_1.C -o ps0_1
In addition, you can use makefiles to help you automate the compilation and linking of your programs. The following command creates the target ps0_1 according to the instructions provided in the makefile makePS0a.
% gmake -f makePS0a ps0_1
Eventually you must learn to use makefiles since they are extremely useful for the development of large programs with several different source code files.
Although you may work on any machine and use any compiler you want, you need to make sure that your code compiles properly using the GNU compiler on an Athena workstation. The graders will be using Athena workstations and the GNU compiler to check your solutions.
3. Debugging
Using a debugger can help you find logical and difficult-to-detect errors in your code much faster and more easily. A debugger can be used to step through the program line by line and examine the program variables during execution. Therefore, you should first correct all syntax errors, using the messages from the compiler, and then execute the program under the debugger to detect potential logical errors. Typically, you compile the program, identify errors using the debugger, correct them in emacs, compile and debug again, and repeat as often as necessary.
The debugger allows you to examine in detail what is happening during a program execution, or when it crashes due to a run-time error. In order to be able to use a debugger, such as gdb and ddd, you must first compile and link your code with the flag -g. e.g.:
% g++ -g ps0_1.C -o ps0_1
- **gdb debugger:**
The g++ -g ps0_1.C -o ps0_1 command generates an executable file that can be examined with gdb, e.g. using the following command for the program compiled above:
% gdb ps0_1
You can find more information about GDB at Debugging with GDB - The GNU Source-Level Debugger.
- **ddd debugger:**
A more user-friendly debugger, available on Athena, is the Data Display Debugger (ddd), which uses gdb for its operations. The ddd program is available in the outland locker. Since we also need the g++ compiler from the gnu locker for the compilation, you need to type:
% add outland
% add gnu
To invoke the ddd debugger with your executable program, e.g. for ps0_1, type:
% ddd ps0_1 &
You can see that three windows pop up:
- The main window, which has three main parts.
- the top panel with the menu bar and the tool bar
- the middle panel with your source code, which is a read-only panel
- the bottom panel with the (gdb) prompt, which is the debugging console
- Debugger command tool window, titled ddd
- A tip window, which often provides useful debugging tips. You can close this window.
In general, the debugging cycle involves stepping through your program carefully once it compiles and seems to run. In particular, step into each function that you wrote; step over, e.g. using the ’Next’ button, system functions. At each step, ’Print’ the variable value(s) computed and check them for reasonableness. Keep going until you find logical errors. Then, make the proper correction, using the editor, recompile and do this again.
The execution of the program can be controlled from the ddd command tool which has the Run, Interrupt, Step, Next etc. buttons.
- To start debugging:
- Find the first executable (non-declaration) line in the program
- Place the cursor on that line (mouse click)
- Then click on the ’Break’ button in the tool panel at the top of the main window. This will set a breakpoint at that line. A breakpoint stops program execution. You should see a stop symbol at that line.
- Click on ’Run’ button in the ddd window.
- This starts program execution, which will run until the next breakpoint or the end. The program will run to the first breakpoint and halt.
- To step through the program:
You can step through the program using the buttons on the ddd window. A green arrow will indicate the current program statement being executed in the source code panel.
- clicking on the ’Next’ button steps over function calls and goes to the next line of the current function
- clicking on the ’Step’ button steps into any function call on the current line
- clicking on ‘Finish’ will finish executing the current function. You can use ’Finish’ to come out of any system source code (cout for example) you ’Step’ into
- Looking at variable values:
- To examine normal variables you can:
- move the mouse on top of a variable and a pop up box will show the variable’s value
- click on the variable so that it becomes highlighted, and then click on the ’Print’ button in the toolbar. Then, the variable’s value will appear in the console
- To examine arrays:
- moving the cursor on top of an array, either the memory address of the first element is shown if values have not been set, or if values have been assigned to the array, the array values are shown
- highlighting the array name makes it appear in the text box in the tool bar. If ’Print’ is clicked now, the memory address is printed in the console.
- adding a * before the array name and clicking ’Print’ will print the value stored in the first element of the array. If you wish to see all the elements in the array, replace ’*arrayName’ by ’*arrayName@arraySize’ in the text box and then click ’Print’.
- To display a variable:
You can continuously see the values stored in a variable, by displaying it instead of printing it. The displayed variable will be shown in a new panel which will pop up above the source code panel. As you step through the program, any changes to the variable’s value will be shown there. You can display a variable by:
- highlighting a variable and clicking on the ’Display’ button on the tool bar. Its value is updated every time it changes.
- To undisplay a variable:
- Right click on the variable’s box and choose ’undisplay’.
To display values of all local variables in the current function:
- choose “Display local variables” from the ’Data’ tab in the menu bar of the main window.
To display the arguments passed to the current function:
- choose “Display function arguments” from the ’Data’ tab in the menu bar of the main window.
- Input and output:
- Input and output is done through the debugging console.
- A cout line, usually found before a cin, will display a prompt.
- When stepping to a cin, enter your input on the blank line at the bottom of the debugging console.
Resources:
You can learn more about ddd from the DataDisplayDebugger web-page.
You can look at the ddd manual as well.
4. Use of makefiles
In some cases a makefile (named make<number_of_problem>) will be provided, and you may use it to compile and link your code. Makefiles are used to automate the compilation and linking of programs. To compile and link a specific program, assuming that a proper makefile is available, the following command is used:
% gmake -f make_file_name <program_name>
You do not have to use the provided makefiles; you can instead use your own makefiles, or any of the makefiles you have seen in the lectures or elsewhere. You need to turn in the makefile that you use to compile your files on Athena (either the ones you got using CVS or your own). Learning to use makefiles will help you when you start writing and compiling larger programs with several files, which makes their use necessary.
In problem set # 0, a simple makefile is provided for you, called makePS0a, which you may use to compile and link your code. There is also a more advanced makefile named makePS0b which you can use.
e.g. athena% gmake -f makePS0a ps0_1
Executing the above command creates the target filename ps0_1, according to the instructions provided in the makefile makePS0a.
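The provided makefiles are not reproduced here, but a minimal makefile along these lines (the flags and targets are illustrative, not the actual contents of makePS0a) shows what such a rule looks like:

```makefile
# Minimal illustrative makefile (not the actual makePS0a)
CXX      = g++
CXXFLAGS = -g

ps0_1: ps0_1.C
	$(CXX) $(CXXFLAGS) ps0_1.C -o ps0_1

clean:
	rm -f ps0_1
```

Running `gmake -f makefile_name ps0_1` then rebuilds ps0_1 only when ps0_1.C has changed.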
5. Concurrent Versions System (CVS)
For the development of large software packages and programs it is useful to use a control system for modifications and revisions. Although it may not seem very useful for the development of small simple programs (like your first homework problems), it will be very useful for your project, and it is therefore beneficial to get used to a revision control system such as CVS (Concurrent Versions System). You may obtain more information on CVS from the man command (% man cvs) and from the following URLs:
- cvs - Concurrent Versions System
- Concurrent Versions System - Tutorials
- CVS Index
- Concurrent Versions System - The Open Standard for Version Control
The provided source code files are in the directory /mit/1.124/Problems/<Problem set number>, from where you can copy them using CVS to your directory and make the necessary additions and/or modifications. To use CVS to check out the problem sets for 1.124, you should first set the environment variable CVSROOT as below (you can also put it in your .environment dotfile):
% setenv CVSROOT /afs/athena.mit.edu/course/1/1.124/src
then you can use the command:
% cvs co Problems/PS<Problem set number>
or, using the alias defined in 1.124/src/CVSROOT/modules
% cvs co OOP_PS<Problem set number>
6. Introduction to C++
C++ is both a procedural programming language and an object oriented programming (OOP) language, since it allows you to organize your program not only around functions (procedures), but also, and most commonly, around data.
A procedural language is based on a list of instructions (statements) organized in procedures (functions), emphasizing the computations to be performed. Languages such as Fortran and C are procedural languages. C++, which is based on C, has many additional features that enable object oriented programming, where emphasis is given to the data and their behavior. Java® is a pure object oriented language.
The additional features and advantages of C++ are the following:
Classes allow programmers to create their own data types, extending the capabilities of the language to match physical-world problems. Classes are similar to the data structures of C, which are also available in C++. However, a class can contain both data variables and the member functions that are used to work with those data variables.
class Complex
{
public:
double real;
double imaginary;
};
main()
{
Complex x;
x.real =15.5;
x.imaginary = 2.5;
}
There are many additional features, like inheritance and virtual functions, added to C++ which are related to classes and provide simple ways to handle objects and develop programs in an object oriented programming style.
C++ allows reusability, since a class which has been written, debugged and checked can be distributed to other programmers and, with minimal effort, be incorporated in several programming packages.
C++ supports function overloading, which allows the use of functions with the same name, as long as they have different signatures. The signature of a function consists of its name and the number and types of its arguments. e.g.:
int min(int x, int y) { }
double min(double x, double y) { }
In addition, virtual functions and polymorphism allow dynamic binding of functions at run time, instead of static binding during compilation.
C++ allows operator overloading, i.e. to use operators with user defined data types according to a specified function associated with the specific operator. For example, we are able to add two complex numbers which are a user defined data type, as long as we provide the necessary functions for operator overloading.
main()
{
Complex x,y,z;
……..
z = x + y ;
}
In C++, there are two ways to comment: // (which comments everything until the end of the line), and /* */ (which comments everything between /* and */).
Two more recent features of C++ are templates and exception handling. Templates allow us to parameterize the types within a function or a class, providing a general definition that can be used for many different purposes. The exception handling mechanism provides a way to respond to and handle run-time errors (like division by zero, exhaustion of memory, etc.).
7. Data Types
- Boolean: bool
- Character and very small integers: char
- Integers: **short int, int, long int** (short, int, long)
- Floating (single, double and extended precision): float, double, long double
The bool data type can be assigned the values true and false, which correspond to 1 and 0, respectively.
A char can be used both as a character and as an integer, depending on how it is used. Therefore, char, short, int, and long are all called integral data types.
There are also unsigned versions of the integer types, which increase the range of the largest number that can be stored, by not using any bit for the sign: unsigned char, unsigned short int (unsigned short), unsigned int, unsigned long int (unsigned long).
The reason for using different data types is mainly memory efficiency, since we can use the data type that corresponds to our needs, avoiding useless waste of memory. The proper data type should be selected based on the expected requirements during the program’s execution.
To refer to particular variables that correspond to a particular chunk of memory in C++ (and in any other programming language) we have to use identifiers (variable names). The variable names in C++ are case sensitive as they are in C, must begin with a letter or an underscore (_), and should not be a reserved keyword. You should use reasonable variable names that provide some meaning to the reader of your code.
Using the above keywords we can declare the data type of a variable, giving the compiler information about the amount of memory required to store (i.e. to allocate and reserve for) the variable, which is machine dependent, e.g.:
- int i, j ;
- double x,y,z;
- long int k;
- unsigned short int i;
The const type modifier (or qualifier) defines a variable as a symbolic constant and does not allow any change of its value. Therefore, a const variable must be initialized when defined, since any attempt to change its value results in a compile-time error.
const int i=10;
Note the different meaning of the following declarations:
const int *p: Pointer to a constant integer
int *const p: Constant pointer to an integer
const int *const p: Constant pointer to a constant integer
8. Variable Declarations and Definitions
Before using any variable we have to declare it, i.e. inform the C++ compiler what is the data type of the variable. Every variable has a certain data type that determines the storage requirements and the operations that can be performed on it. In C++, as in C, we can combine several separate variable declarations into one declaration, as long as each variable is of the same data type, e.g.:
<data_type1> <variable1_name>;
<data_type2> <variable2_name> = <initial_value>, <variable3_name>;
int a, b, c;
double x,y ;
float z ;
The above declarations are also definitions. A declaration simply informs the compiler that the variable exists, what its data type is, and that it is defined somewhere else in the program. C++ allows many declarations of the same variable, as long as they are consistent. The definition of a variable establishes the variable’s name and data type, and may also initialize the variable. Defining a variable informs the compiler about the variable’s data type so that it reserves the proper amount of memory to store values for that variable. The difference is that the definition reserves memory for the variable; therefore, there must be only one definition of a variable in a program. A declaration is also a definition if it also sets aside memory at compile time.
For example, the following statement is a declaration because it informs the compiler that an external global variable will be used, but no memory is allocated for that variable. The memory is allocated at the definition of the variable.
extern int x_limit ; // declaration
We can also initialize a variable in its definition, assigning an initial value; this is called initialization. When a value is assigned to an already defined variable, this is called assignment:
int a, b, c; // definitions
double x = 3.4, y(4.); // definitions and initializations
float z = 9.9; // definition and initialization
c = 10; // assignment
9. Operators
You will often need to use operators which perform a specified action on their operands. Most operators are binary, i.e. they have two operands, e.g. a+b. The operators in C++ are the same as the ones used in C.
These are:
- the arithmetic operators of C++ are the +, -, *, / and %
- the assignment operator is the =
- the shorthand (abbreviated) assignment operators: += , -= , *= and /=
- the (unary) postfix and prefix increment/decrement operators ++ and --
- the relational operators are: > , < , >= , and <= (used to compare two expressions)
- the equality operators are: == and != (used to check two expressions for equality)
- the logical operators are the: && , || , and !
An assignment expression has the value that is assigned to the variable on the LHS, and, therefore, several variables can be assigned the same value in one statement,
e.g.: x = y = z = 100;
The RHS of a logical operator is evaluated only if it is necessary for the decision that must be taken, e.g. if the LHS of a logical AND (&&) is false, there is no reason to examine its RHS.
In addition, in C++ it is allowed to create new definitions for operators applied to user defined data types.
main()
{
Point x,y,z;
……..
z = x + y ;
}
10. Expressions and Statements
Each C++ program must contain a function named main(), as in C. The execution of a C++ program begins from the first statement of main and finishes when the last statement of main is executed (e.g. when the last curly brace of main is reached).
An action in C++ is referred to as an expression, while an expression terminated by a semicolon is called a statement. More than one statement enclosed in a pair of curly braces is called a compound statement.
Precedence and associativity: The sequence (order) in which individual components of an expression in C++ are executed is based, as in C, on the order of precedence. When operators have the same precedence, associativity defines the order of execution. A table with the precedence and associativity of the C++ operators is provided; you can find similar tables in any C++ textbook.
e.g. a = b += 5 + 3 / 2 - 5 + 17 / 2 / 3
(order): (8) (7) (4) (1) (5) (6) (2) (3)
A table of the precedence and associativity of the C++ operators is provided, with the highest precedence belonging to the operator ::, which has precedence level equal to 1.
Conversions: C++ defines a set of standard conversions, implicit type conversions, that are used in arithmetic conversions, assignments using different data types, and passing arguments to a function of different data types than the function parameters. In particular, when we have such mixed expressions the compiler makes some standard conversions, e.g. in binary operations the lower data type is promoted to the higher one (which dominates), so as to avoid losing information.
The following order is used:
bool, char, short int < int < long int < float < double < long double
bool, char and short int are always converted to int whenever they appear in any expression, i.e. before performing any operation on them. A bool is promoted to int, getting the value 1 or 0, depending on its value (true or false, respectively). When a number is converted to type bool, all values other than zero are converted to true and a zero value is converted to false. Integer constants (e.g. 17) are considered int, and floating point constants (e.g. 4.53) are considered double. We can use L after an integer or a floating point constant to specify that it should be considered a long int or a long double, respectively.
In C++, we can also define rules of conversions to be used with operators applied on user-defined data types.
We can also explicitly define type conversions using casting to force the explicit conversion from one data type to another.
(dataType) variableOrExpression ; and dataType (variableOrExpression) ;
Another way to do an explicit conversion, i.e. to cast a constant or variable of one data type to another data type, is using the keyword static_cast followed by a data type name surrounded by angle brackets and the variable or constant to cast within parentheses, e.g.
static_cast <dataType> (variableOrExpression)
static_cast <float> (5) / 3 // gives 1.66667
/* Example: Mixed Expressions - Precedence - Associativity - Casting */
#include <iostream.h>
main()
{
int i=4 ;
float f = 2.5 ;
double d = 3;
cout << "\n i / 5 * f = "                    // Mixed Expressions
<< i / 5 * f << endl ;
cout << "\n 'a' = " << 'a' << endl ;
cout << "\n 'a' - 1 = "                      // Mixed Expressions
<< 'a' - 1 << endl ;
cout << "\n 'f' - 'd' = "                    // Mixed Expressions
<< 'f' - 'd' << endl ;
cout << "\n f + 5 * d = "                    // Precedence
<< f + 5 * d << endl ;
cout << "\n f * 2 * 2.5 = "                  // Associativity
<< f * 2 * 2.5 << endl ;
cout << "\n (float) i / 5 * f = "            // Casting
<< (float) i / 5 * f << endl ;
cout << "\n float (i) / 5 * f = "            // Casting
<< float (i) / 5 * f << endl ;
cout << "\n static_cast <float> (5) / 3 = "  // Casting
<< static_cast <float> (5) / 3 << endl;
}
Results
i / 5 * f = 0 (float, 0.0)
'a' = a (character)
'a' - 1 = 96 (int)
'f' - 'd' = 2 (int)
f + 5 * d = 17.5 (double)
f * 2 * 2.5 = 12.5 (double)
(float) i / 5 * f = 2 (float)
float(i) / 5 * f = 2 (float)
static_cast <float> (5) / 3 = 1.66667 (float)
11. Input/Output Operators
In C++ the predefined objects cin, cout and cerr are available for input and output operations. The predefined object cin refers to the standard input (which is by default the keyboard), cout and cerr refer to the standard output and the standard error, respectively (which are both by default the display). These defaults can be changed using redirection while executing the program.
The output operator (<<) (known as the insertion operator) directs output information to your standard output (screen), e.g.
cout << "\n x = " << x << endl ;
"\n" represents a newline character, while endl inserts a newline and flushes the output buffer. The operating system buffers the characters output to the display and prints them out in a batch to minimize I/O overhead. This may lead to wrong indications of where an error occurred if we rely on the printed information without flushing the buffer.
We may have several output operators in the same output statement.
Similarly, you can use the input operator (>>) to obtain (read) input values from the standard input (keyboard), e.g.:
cout << "\n x = " ;
cin >> x ; cin >> y >> z;
The standard input-output library (iostream.h) must be included using a preprocessor directive, in order to be able to use the input and output operators, as well as the manipulators without arguments such as endl, flush, hex, oct, etc.
Certain options may be specified when using the output stream operator to select the way that the output should look. The precision can be set using an iostream manipulator, setprecision(number_of_digits), while setiosflags(options separated by |) can be used e.g. to specify whether the decimal point or trailing zeros should be shown. setw() specifies the field width in which the next value should be printed. It is the only manipulator that does not apply to all subsequent input or output, but is reset to zero as soon as something is printed. setfill(c) makes c the fill character. To use these parameterized stream manipulators (i.e. with arguments) we need to include the iomanip.h header file. e.g.:
cout << setprecision(2) << setiosflags(ios::fixed | ios::showpoint)
<< "\n\n 3. = " << 3. << "\t 0.333333 = " << 0.333333 << endl;
(will give: 3. = 3.00 0.333333 = 0.33)
We can also redirect the input from the keyboard to a file and the output to another file:
athena% executable_file_name < input_file_name > output_file_name
Finally, there is a standard error stream, cerr, which is used to display error messages:
cerr << "\n Improper values were provided!" ;
The following member functions can be invoked on the input stream, cin: cin.good() returns true if everything is OK; cin.eof() returns true if EOF is reached; cin.fail() returns true if a format error has occurred.
The C input/output functions **scanf()/printf()** can also be used, since C is a subset of C++. In that case the stdio.h header file (which contains their prototypes) must be included. Then, the buffer can be explicitly flushed using fflush(stdout);.
However, when both C input/output functions and C++ input and output operators are used, you need to provide the following function call before doing any input or output, to avoid problems:
ios::sync_with_stdio();
12. Preprocessor Directives
The preprocessing takes place prior to the actual compilation. The preprocessor searches all files that are to be compiled and takes action according to the preprocessor directives. The preprocessor directives are the lines which begin with # (usually placed at the top of the source code file).
An include preprocessor directive results in its substitution with the contents of the indicated file, i.e. the following include preprocessor directive:
#include <file.h>
is equivalent to typing the contents of the included file at that point.
There are two variations of the include preprocessor directive:
#include <file.h> or #include “file.h”
The difference is that in the first case the preprocessor searches for the included file in the standard include directory, while in the second case it searches in the current directory. The latter case is usually used for the user written functions.
Another preprocessor directive is #define, which can be used to associate a token string with an identifier, e.g.
#define PI 3.1415926
The preprocessor will replace PI wherever it appears in a file with the provided token. After the preprocessing finishes the compilation starts, in which the token is treated as a floating point constant.
Other preprocessor directives are the following: #ifdef, #ifndef, and #endif. They can be used for conditional compilation.
e.g. the …… statements will be skipped if _MY_HEADER_H has already been defined.
#ifndef _MY_HEADER_H
#define _MY_HEADER_H
……..
#endif
Also, while compiling a program we can define a preprocessor constant on the command line using the -D option followed by the name of the preprocessor constant; therefore, certain parts of the code can be selectively included or excluded.
e.g. compiling the file with -DDEBUG_MODE option will consider the cout statement:
#ifdef DEBUG_MODE
cout << "\n testing debug mode \n" << endl;
#endif
The define directive can also be used to specify macro substitutions with variable parameters. e.g. having defined the following macro using:
#define mult(x,y) (x*y)
then, the following statement: product = 267.2 + mult(25.7, 33.6)
will be replaced during preprocessing by: product = 267.2 + (25.7 * 33.6)
13. Header Files
To be able to use the input and output operators you must first include the standard input-output library (iostream.h) using a preprocessor directive:
#include <iostream.h>
When the C input/output functions scanf()/printf() are used the stdio.h header file (which contains their prototypes) must be included instead.
Similarly to be able to use any other standard library function you need to include its header file which contains all necessary declarations.
e.g. to be able to use the math function, such as sqrt() you need to include the math.h header file using the following command:
#include <math.h>
When the header file that you include is in the current directory, e.g. a header file that you wrote, then you should use double quotes instead of angle brackets, e.g.:
#include "myheader.h"
According to the new ANSI/ISO standard, which however is not followed by all available compilers yet, the iostream header file can be included using:
#include <iostream>
using namespace std;
int main()
{
std::cout << "\n testing: pi = " << 3.1415 << endl;
}
Header files are very useful for providing declarations (e.g. for global variables and functions), in order to avoid the incompatible declarations that may occur when multiple declarations are provided in several source-code files. In addition, any change to a declaration then requires only a single local modification, instead of having to update all appearances of the declaration. A header file should never contain definitions (unless it is the definition of an inline function).
14. Control Structures
Control statements are used to control the flow of our programs, which is normally sequential, i.e. statements are executed one after the other in order. Changing this sequential execution in a controlled way is called transfer of control and is achieved using control structures.
The C++ control structures are identical to those of C. The relational operators ( > , < , >= , <= ), equality operators ( == , != ), and the logical operators ( && , || , ! ) are used in logical tests which produce either true (1) or false (0).
Typically, the following operators are used to form a logical test for the control structures which determines what action should be taken:
- the relational operators: > , < , >= , and <= (which are used to compare two expressions)
- the equality operators: == and != (which are used to check two expressions for equality)
- the logical operators: && , || , and !
The result of the above operators is of type bool, either true (i.e. 1) or false (i.e. 0). You should never compare floating-point values for equality or inequality, since floating-point numbers can, in general, only be approximated, because only a limited number of digits is used for the computer representation of values.
In all control structures, a simple (i.e. single), or a compound, i.e. a sequence of statements enclosed in curly braces, statement is either conditionally or repeatedly executed, based on a logical test.
The **if** and if-else, as well as the switch control structures, are used to make certain selections.
The simplest selection control structure is the if, where if the logical test is true (i.e. non zero), then the following statement (or statements in curly braces) is (are) executed. Otherwise the statement (or statements) is (are) skipped and the statement after the if control structure is executed.
if (logical test)
statement ;
The if-else if-…-else control structure provides several alternative actions. (It is more efficient to put the most probable selection first, to reduce the chances of multiple checks.)
if (logical test) (if-else if -else provides alternative actions)
{
statements (executed if (logical test) is true)
}
else if (another logical test) (checked if (logical test) is false)
{
statements
}
else (executed if not any (logical test) is true)
{
statements
}
The **switch()** control structure is useful when there are many different selections. It consists of multiple selection cases and an optional default case. Only constant integral expressions can be used in switch() case labels. The controlling expression in the parentheses determines which of the cases should be executed; execution starts at that case and continues until the closing brace of the switch control structure, or until a break is reached. The break causes the program to exit the switch structure and execute the next statement after it.
switch (x)
{
case 1:
statements
break;
case 2: case 'b': case 'B':
statements
break;
case 3:
case 'c':
statements
break;
……..
default:
………
}
while, do/while, and for are the repetition control structures of C++, i.e. are used when iterations are required.
The statements of the while control structure are executed repeatedly as long as the logical test is true; at each iteration, when the closing brace is reached, control passes back to the beginning of the while loop. The while loop is repeated until the logical test becomes false (i.e. equal to zero).
while(logical test)
{
statements // statements executed repeatedly as
// long as the logical test is true
}
The do/while is similar to while with the only difference that its body is executed at least once since the check is done at the end.
do
{
statements // executed repeatedly as long as the logical test
} while(logical test); // is true but always executed at least once
The for control structure is used for repetitions when we have regular incrementing. First, expr1 is evaluated, which is usually used to initialize the loop variables. Then the logical test (the loop continuation condition) is evaluated, and if it is true (i.e. nonzero), the statements within the curly braces are executed. Finally, expr3 is executed (usually providing an increment or decrement of the control variable), and then the procedure is repeated from the evaluation of the logical test, for as long as it is true. expr1 and expr3 can be comma-separated lists of expressions, which are executed from left to right. All three expressions are optional, although the two semicolons are always required.
for (expr1 ; logical test ; expr3)
{
statements in body of for loop
}
Finally, the following control structure is the conditional operator which produces a value based on the logical test. Therefore, it can be placed inside another expression. The first expression after the question mark is executed if the logical test is true. Otherwise, the expression after the colon is executed.
( logical test) ? when_true_statement : when_false_statement ;
e.g.: max = (x>y) ? x : y ;
(i%2) ? cout << i << " is an odd integer" : cout << i << " is an even integer" ;
The break statement is typically used to skip the remainder of the switch statement. It is also used to exit repetition control structures (i.e. while, do/while and for). In all these cases execution continues with the first statement after the terminated control structure. The break statement exits the innermost loop, or switch statement.
The continue statement is used to skip the current iteration of a repetition control structure and continue with the next iteration, if there is one, i.e. the current iteration only is terminated and execution continues with the evaluation of the logical test of the next iteration. It goes to the next iteration of the innermost loop.
These notes were prepared by Petros Komodromos.
Contents
- Functions: declarations, definitions, and invocations
- Inline Functions
- Function Overloading
- Recursion
- Scope and Extent of Variables
- References
- Pointers
- Function call by value, References and Pointers
- Pointers to functions
- 1-D Arrays
- Strings as arrays of char
- Arrays of pointers
- 2-D and higher dimensions arrays
- Return by reference functions
- Dynamic memory allocation
- The sizeof operator
- Data structures
- Introduction to classes and objects
1. Functions
A function is a block of code that performs specific tasks and can be called from another point of a program. After all statements of a function have been executed, control returns to the point from which the function was invoked, and the next executable statement there is executed. With functions we can organize the components of a program into sub-units using procedural abstraction. This allows us to break a complex problem into several small subproblems that can be handled more easily by separate blocks of code.
main() is a special function that is invoked whenever the program is executed. Upon its return the execution of the program is terminated. It typically returns an int, which by convention is equal to zero when there are no problems. In contrast, a nonzero value indicates an error.
The following three steps are required to use a function:
(i) Function declaration or prototype (optional): It informs the compiler that the specified function exists and that its definition appears somewhere else in one of the source code files. In particular, it specifies its name, and parameters. By providing information about certain characteristics of the function, before invocation statements, the compiler is able to make checks for possible inconsistencies in function calls.
The function declaration specifies the function name, which should follow the basic naming restrictions, the number and data types of the arguments to be passed to the function, and the data type of the returned value. The keyword void is used when the function does not return a value and/or has no arguments. An empty parameter list, i.e. empty parentheses, is also allowed, but according to Standard C++ the return type must always be explicitly specified.
The function name together with the number and data types of its parameters constitutes the function signature, which uniquely identifies the function.
The declaration is optional if the definition appears before any function call. However, it is a good practice to always provide function declarations. They should preferably be provided in a header file, which can be included whenever it is necessary. A function can be declared multiple times in a program, and, therefore, a header file with declarations can be included several times as well.
returnType functionName(param1Type param1Name, param2Type param2Name, ...);
The return type of a function can be any data type, either built-in or user-defined. It is useful, although optional, to provide the names of the arguments for documentation purposes. A function can return a single value, which can also be a data structure, or an object, when it returns back at the invocation point. If no value is returned then the return type of the function should be defined as void.
e.g: int fun1(double, int);
void fun2( double x, double y);
float fun3(void);
(ii) Function definition: The function definition consists of the function definition header and the body of the function definition. Although the definition header looks like the declaration, the names of the local parameters are required. Inside the function body the provided statements are executed. More specifically, a function definition consists of 4 parts:
a return type
a function name
an argument list
the function body
A function declaration is similar to the header of the function definition, providing the return type, name, and parameter list of the function. The function definition, which provides the body of the function enclosed in curly braces, must appear exactly once, except in the case of an inline function.
When passed-by-value is used, the parameter names are associated with the values passed during function invocation, and they are actually local variables whose memory is released upon exiting the function. When a return statement is encountered control returns to the point from where the function was invoked.
returnType functionName(par1Type par1Name, par2Type par2Name, …..)
{
function body
}
e.g: void fun2(float x, double yy)
{
cout << "\n x+y = " << x+yy;
}
(iii) Function calling (or, function invocation): A function is invoked by writing its name followed by the appropriate arguments separated by commas and enclosed in parentheses:
function_name(arg1, arg2);
If the data types of the arguments are not the same as the corresponding parameter types of the function, then either an implicit conversion is performed, if possible, or a compile-time error occurs. For an implicit conversion in which accuracy may be lost, e.g. converting a double to an int, a warning should be issued during compilation.
Functions work with copies of the arguments when they are called-by-value (i.e. without using references). Upon entering the function memory is temporarily allocated in order to store the passed values. After the function has been executed, the control returns to the calling function and the values of the local variables and parameters that are called-by-value are lost, since the corresponding memory is no longer reserved. The only exception is when we deal with a static local variable.
/* Example on functions */
#include <iostream>
using namespace std;
double get_max(double x , double y);
void print_max(double x , double y);
int main()
{
double x=11, y=22 ;
cout << "\n Max = " << get_max(x,y) << endl;
print_max(x,y);
}
double get_max(double x , double y)
{
if(x>y)
return x;
else
return y;
}
void print_max(double x , double y)
{
if(x>y)
cout << " Max = " << x << endl;
else
cout << " Max = " << y << endl;
}
Output
Max = 22
Max = 22
Default Arguments of a Function
Default values can be specified for some, or all, parameters of a function; they are used when the corresponding arguments are omitted at the function call. Whenever fewer than the total number of expected arguments are provided, the values supplied at the call are matched to the left-most parameters, and the remaining right-most parameters take their specified default values. Therefore, all parameters without default values must appear first in the parameter list.
/* Example on default arguments to a function */
#include <iostream>
#include <cstdlib>
using namespace std;
void fun(double a=11.1, int b=22, double c=7.6);
int main(void)
{
fun();
fun(34.9);
fun(5.6, 3);
fun(12.4, 3, 19.5);
return EXIT_SUCCESS;
}
void fun(double a, int b, double c)
{
cout << " a = " << a << " b = " << b << " c = " << c << endl;
}
Output
a = 11.1 b = 22 c = 7.6
a = 34.9 b = 22 c = 7.6
a = 5.6 b = 3 c = 7.6
a = 12.4 b = 3 c = 19.5
2. Inline Functions
Using functions saves memory space because all the calls to a particular function execute the same code, which is provided only once. However, there is an overhead due to the extra time required to jump to the point that the instructions of that function are provided and then return back to the invocation point. In some cases, where very small functions are used repeatedly, it may be more efficient to have the code incorporated at the point of the function call instead of actually calling the function, so as to avoid the associated overhead. Inline functions can be used for this purpose. An inline function is a regular function with a suggestion to the compiler to insert the instructions at the points where the function is called.
A definition of a function as inline suggests to the compiler to expand the body of the function at all points where it is invoked. An inline function is expanded by the compiler during the compilation phase and not by the preprocessor (during macro substitutions). This is useful for very short functions which may result in a computational overhead whenever they are called. Inline functions eliminate the overhead of function calls while allowing the usage of procedural abstraction.
A function is defined as inline using the keyword “inline” before its header. The inline directive should be specified in the function definition, rather than in its declaration.
e.g.:
inline double max(double x, double y)
{
…….. // statements
}
The compiler may, or may not, expand the function at its invocation points to avoid this overhead. However, it is necessary that the compiler sees the function definition, and not just the declaration, before it encounters the first function call, so as to be able to expand it there.
A disadvantage of inline functions is that if an inline function is modified, all source code files that use it must be recompiled. This is necessary because, if the compiler follows the inline suggestion, the body of the function is expanded at every point where it is called, whereas a non-inline function is invoked at run time. Also, an inline function must be defined, with an identical definition, in every file in which it is used. A convenient way to avoid maintaining multiple copies of the same definition is to place it in a single header file that is included by every source code file that uses the inline function.
3. Function Overloading
C++ allows function overloading, i.e. having several functions with the same name, in the same scope, as long as they have different signatures. The compiler can distinguish which one to actually invoke based on the data types of the parameter list of each one, which should be different, and the provided arguments. The only overhead from using overloaded functions is during compilation, i.e. there is no effect during run time.
The signature of a function is considered to be its name and parameters, specifically their number and data types. The return type of a function is not considered part of the signature. When an overloaded function is called, the compiler selects the matching function among those with the same name using the data types of the provided arguments. The process by which a function is selected among a set of overloaded functions is called function overload resolution, and it follows certain rules based on the arguments provided at the function call. Briefly, the overload resolution process first identifies the candidate functions, i.e. all visible functions with the same name. Among these, the viable functions are selected based on the data types of the arguments at the call, considering possible conversions. Finally, the best matching function, if any, is selected among the viable candidates based on how good the required conversions are.
/* Example on function overloading */
#include <iostream>
using namespace std;
int min(const int x, const int y) ;
double min(const double x, const double y) ;
int main()
{
int x,y;
x=3;
y=7;
cout << "\n min(x,y) = " << min(x,y) << endl;
double z=4.5, w=2.34 ;
cout << " min(z,w) = " << min(z,w) << endl;
}
int min(const int x1, const int x2)
{
if(x1<x2)
return x1;
else
return x2;
}
double min(const double x1, const double x2)
{
if(x1<x2)
return x1;
else
return x2;
}
Output
min(x,y) = 3 // int min(const int x1, const int x2) has been called
min(z,w) = 2.34 // double min(const double x1, const double x2) has been called
4. Recursion
An algorithm can have an iterative formulation, a recursive function, or both formulations combined. A function is said to be recursive if it calls itself. Each time that a function calls itself a new set of local variables is created. This set is independent of any other local variables created in previous calls, assuming that all calls are by value and no static local variables are used.
Typically a recursive function calls itself with a smaller problem to solve. This is the 'divide-and-conquer' approach, which decomposes a problem into smaller ones with characteristics similar to the original problem. A recursive function should test, at its beginning, a base case that determines whether the recursive calls should be terminated.
The following two functions can be used to compute the factorial of a number. First, an iterative version is provided, followed by a recursive one.
int factorialIterative(int n) // iterative version
{
int result=1;
while(n>1)
result *= n--;
return result;
}
int factorialRecursive(int n) // recursive version
{
if(n==0)
return 1;
return n*factorialRecursive(n-1);
}
5. Scope and Extent of Variables
The scope of a variable is where in the program the variable is accessible and therefore it can be used, i.e. the scope defines where the variable can be used, or assigned. In general, variables are accessible only in the block in which they are declared. In C++ there are 3 different scopes: the local scope, the namespace scope, and the class scope.
A variable defined inside a function is a local (or automatic) variable. The scope of a variable that is defined in a compound block inside a function is limited inside that block. Local scopes can be nested. However, the parameters of a function have local scope and cannot be redeclared inside the function’s local scope, or inside any other nested local scopes in the function.
An entity that is defined outside any function or class definition has a namespace scope. User-defined namespaces can be defined using namespace definition, as we will see in a later recitation. For now we will consider only the global scope, which is a specific case of the namespace scope. In particular, the global scope is the outermost scope of a program. A global variable (or function) is a variable defined outside of all functions.
A global variable is accessible from any statement after its declaration or definition. In contrast, a local variable is accessible only inside the function in which it has been declared.
A global entity (i.e. variable, object, function) can be declared several times in the source code files of a program, while only one definition may appear. The only exception is inline functions, which may have several (identical) definitions, one in each source code file. A global variable can be declared many times using the keyword extern to indicate that the variable is defined somewhere else (in another source code file). Only one definition must be provided, and it is only then that memory is allocated for the variable. If a global variable is not initialized at its definition, it is automatically initialized to zero.
In addition, in C++, the user defined data types (classes) allow us to specify certain permissions for the access of data members of the objects we define using the classes we develop.
The extent or lifetime of a variable is the duration for which memory is reserved for it. The memory allocated for parameters and local variables is in general reclaimed upon returning from the function, and therefore, they have dynamic extent. In contrast, the global variables have static extent since the memory allocated for them is never reallocated. Local variables can also have static extent if the keyword static is used when they are defined.
There are 4 storage classes:
Automatic: Local variables, or objects, have dynamic extent unless they have been defined as static. Memory for automatic variables and objects is allocated from the run-time stack upon entering the function and is automatically deallocated, i.e. released, upon exiting it. If an automatic variable is not initialized, its value is unspecified, since the allocated memory contains whatever happened to be there.
External: Global variables which, unless they have been defined as static, are accessible from any part of the code, i.e. in any file as long as either the global variable definition (and this can occur only once), or a declaration using the keyword extern (this can occur several times) is provided.
Static: Both local and global variables can be defined static using the static keyword. However, static local and static global variables are different.
Static local variables are initialized once and have extent, i.e. lifetime, throughout the program, but their scope, i.e. visibility, is limited to the function in which they are defined. Defining a local variable as static gives it static extent, i.e. the memory allocated for it is reserved until the termination of the program. Memory for a static local variable is allocated only once, the first time the function is entered, and the currently stored value is retained and available at the next entry. An uninitialized static local variable is by default initialized to zero.
Static global variables, or functions, are global variables, or functions, that are not accessible outside the file in which they are defined, i.e. their use is restricted to that file.
Register: This storage class is similar to automatic, with the only difference that the compiler is advised to keep these particular variables in CPU registers so as to save time when they are frequently used, e.g. a frequently used loop variable. An automatic variable can be declared to have register storage using the register keyword, e.g.: register int i;
Access of variables: A name resolution process determines, during compile time, to which particular entity, i.e. location in memory, a particular name (of a variable, function, object, etc.) refers, considering the provided name and the scope in which it is used.
Global Variables: variableName or :: variableName
Local Variables: variableName
Object Member Variables: object.variableName or point_obj -> variableName
or this -> variableName
If, inside a function, a local variable has the same name as a global variable, the local variable hides the global one. In C++ the hidden global variable can still be accessed using the scope resolution operator, e.g. ::var accesses the global variable var.
/* Example on scope and extent of variables */
#include <iostream>
using namespace std;
extern double y; // external variable (defined in another file)
static double x=25.5; // static global variable
void fun(double x);
int main()
{
int x=3; // local variable
fun(x);
fun(::x);
fun(x);
}
void fun(double x)
{
static int s=0; // static local variable
int n=0; // automatic (dynamic local) variable
cout << " n = " << n << "\t s =" << s
<< "\t x = " << x << endl;
}
Output
n = 0 s =0 x = 3
n = 0 s =0 x = 25.5
n = 0 s =0 x = 3
6. References
A reference serves as an alias (i.e. a nickname) for the variable or object with which it has been initialized. A reference is defined using an address-of operator (&) after the data type. References are typically used when a function is called, as an alternative to pointers, in order to be able to work on the actual variables that are used as arguments when calling the function, rather than with their values.
When a parameter of a function is defined as a reference, the corresponding argument is said to be passed by reference rather than by value. Since all operations on a reference are actually applied to the variable to which it refers, a reference can only be bound to a variable at its definition; it cannot be made to refer to a different variable later. It is also possible to have a reference to a pointer, as shown in the example of the following section.
e.g: double x=15.75, &rx= x; // rx is a reference to a double
rx += 20; // x becomes equal to 35.75
7. Pointers
A pointer variable is used to hold a memory address. Every pointer has an associated data type that specifies the type of the variable or, in general, object to which the pointer can point. Since pointers are variables that store addresses of other variables, they must be defined before being used. A pointer is declared as a pointer to a particular type using the dereference operator (*) between the data type and the name of the pointer. Using the address stored in the pointer, the variable stored at that address can be manipulated indirectly.
e.g.: double *px, *py, x, z, *pz ; int j, *pj, k ;
The memory storage allocated for a pointer is the size necessary to hold a memory address (typically one machine word). The address of a variable (i.e. the location in memory where it is stored) can be obtained using the address-of operator (&), e.g. &x gives the address where the variable x is stored. A pointer can be assigned the value 0 or NULL to indicate that it points nowhere. However, it is not allowed to assign to a pointer the address of a variable of a different data type.
The address of a variable is stored in a pointer using the address-of operator, by an assignment such as: px =&x;. The value which is stored in an address pointed to by a pointer can be accessed using the dereference operator (*). The value of a pointer is the address that it points to. Dereferencing a pointer gives the value which is stored at the memory location stored as the value of the pointer.
e.g. double *px, x;
px = &x ;
*px = 25.5 ; // this is equivalent to assigning x=25.5
Pointers can be used in arithmetic, assignment, and comparison expressions. An integral value can be added to, or subtracted from, a pointer. According to the rules of pointer arithmetic, adding (or subtracting) a value n to a pointer yields an address equal to the current address plus (or minus) n times the amount of memory required to store the data type to which the pointer points. A pointer can be incremented, decremented, or subtracted from another pointer. However, two pointers cannot be added, and pointer multiplication and division are not allowed.
A special pointer that can be used to store any type of pointer is called a pointer-to-void, and can be defined using the keyword void as the data type. However, no actual manipulation of the contents of the address pointed to by a pointer to void can be performed directly. Since a pointer-to-void does not provide any data type information concerning the data stored in the memory at which it points, an explicit cast is required to specify the data type of the data stored there.
double x=10.75, *px=&x;
void *vp = &x;
cout << "\n x = " << x ;
cout << "\n *px = " << *px ;
// cout << "\n *vp = " << *vp ; // <------- Wrong!
cout << "\n *vp = " << *(static_cast <double*>(vp)) ; // ok
The following example demonstrates the use of both references and pointers:
/* Example on references and pointers*/
#include <iostream>
#include <cstdlib>
using namespace std;
int main(void)
{
int i=5, &ri = i ; // integer and reference to an integer
double x=24.5, *px=&x, *&rpx=px; // double, pointer and a reference to pointer to a double
ri++;
*px += 100;
*rpx += 1000;
cout << "\n i = " << i << "\t ri = " << ri;
cout << "\n x = " << x << "\t *px = " << *px
<< "\t *rpx = " << *rpx << endl;
return EXIT_SUCCESS;
}
Output
i = 6 ri = 6
x = 1124.5 *px = 1124.5 *rpx = 1124.5
8. Function Call by Value, by Reference and Using Pointers
Pointers and references provide ways to overcome the problems associated with call-by-value. Often we need to change the values of variables within a function call, which is impossible using plain call-by-value, since it passes only copies of the provided arguments; these copies are lost upon exiting the function, and only one value can be returned by the function. In addition, large user-defined objects are often passed as arguments to a function. In those cases, call-by-value requires memory allocation and copying of the passed arguments to the corresponding parameters, which can be too costly in both computational time and memory space. Finally, in some cases we may need to return more than one value from a function.
Therefore, in many cases calling a function by-value does not help much. There are two alternative approaches, either using a call-by-reference, or sending with call-by-value the address of the objects that we want to pass as arguments and operate on them indirectly using pointers.
We can change variables of the calling function by passing them by reference, in which case the parameters are aliases for the actual variables. To pass an argument by reference, an address-of operator (&) is placed after the data type of the corresponding parameter. When an argument is passed by reference, an alias to it is used inside the function to access the actual variable of the calling function, so all changes are actually made to the variable that was provided as an argument at the invocation. A reference parameter is bound once, upon entering the function, and cannot be made to refer to a different variable or object. When we want to pass a large object by reference purely to avoid the overhead of making a local copy, and do not intend to change it, we can declare the reference const so as to prevent any accidental modification.
An alternative way is to use pointers and pass the addresses of the variables, which allows us to access the variables indirectly and change the values stored at those locations. This is indicated by a * operator after the data type of the parameter, since it is actually a pointer. Using addresses of variables as arguments, we can indirectly access and change the values of variables of the calling function. In addition, memory needs to be allocated only for the pointer and not for the entire object, saving the time and space overhead of copying the arguments to the function parameters. In C++ an array is always passed as a pointer to its first element, i.e. it is never passed by value.
The following example demonstrates the use of call by-value, by-reference and using pointers.
/* Example for call-by-value, call-by-reference and using pointers */
#include <iostream>
using namespace std;
void fun(double x, double &y, double *z);
int main()
{
double x=11.1, y=22.2, z=33.3;
fun(x,y,&z);
cout << "\n x = " << x << "\t y = "
<< y << "\t z = " << z << endl;
}
void fun(double x, double &y, double *z)
{
x *= 2;
y *= 2; // using call by reference
*z *= 2; // using pointer to access the actual variable
cout << "\n x = " << x << "\t y = " << y << "\t z = " << *z << endl;
}
Output
x = 22.2 y = 44.4 z = 66.6
x = 11.1 y = 44.4 z = 66.6
9. Pointers to Functions
The name of a function is actually the address of the function in memory. A pointer to a function can be used as an argument to a function to allow us to selectively invoke one out of several different functions, depending on the name of the function we use as argument. To use a pointer to a function we need to declare it in the declaration and definition of the function, i.e. specify that the function accepts as an argument a pointer to another function. A pointer to function is defined using the function's type, which consists of its return type and parameter list. For example, the following declaration declares that the function fun() has 3 arguments: an int, a pointer to a function that itself returns a double and has a double and an int as arguments, and a float.
double fun(int i, double (*f) (double, int), float);
A function name, in general, gives a pointer to that function, although an address-of-operator can also be used to get (explicitly) the same. A pointer to a function can also be initialized, or assigned a value. When calling the function to which the pointer points to, the pointer’s name, either by itself or dereferenced, can be used.
An example of a pointer to a function is presented below. The function compute() has 3 arguments: a pointer to a function, and 2 integers. The name of any function that returns a double and has two doubles as arguments can be provided in the function call of compute(). The provided function is then used inside the compute() function whenever f() is used.
/* Example of pointers to functions */
#include <iostream>
using namespace std;
double adding(double x, double y);
double subtracting(double x, double y);
double compute(double (*f)(double,double), int i, int j);
int main()
{
int x=7, y=3;
cout << "\n compute(adding,x,y) = " << compute(adding,x,y) << endl;
cout << " compute(subtracting,x,y) = "
<< compute(subtracting,x,y) << endl;
}
double compute(double (*f)(double,double), int i, int j)
{
return f(0.5*i,j); // The following is equally valid: return (*f)(0.5*i,j);
}
double adding(double x, double y)
{
return x+y;
}
double subtracting(double x, double y)
{
return x-y;
}
Output
compute(adding,x,y) = 6.5
compute(subtracting,x,y) = 0.5
10. 1-D Arrays
An array is used to store a set of values, or objects, of the same (either built-in or user-defined) data type in one entity. An individual element of the array, i.e. a member of this set, is accessed using the array's name and an index, which should be a value, or an expression, of integral type. The elements are accessed by their position in the array, with indexing starting from 0. Therefore, the last element of an n-size array has index n-1. An array is defined using a pair of square brackets as shown below:
<data_type> <array_name> [size];
The size of the array, at the array definition, must be a constant expression, i.e. known at compile time; it may be omitted only when all elements of the array are explicitly initialized at the definition. If fewer elements are initialized than the size provided at the definition, the remaining elements are initialized to zero, e.g.:
double x[5]; // an array of 5 doubles is defined
int y[]={ 3 , 56, 4, 6 }; // an array of 4 int
float z[6] = { 0. }; // all 6 float members are set to 0.
int h[7]={ 13 , 26, 42 }; // an array of 7 int; the last 4 elements are initialized to 0
When an array is defined, the appropriate amount of consecutive memory is allocated. The array name itself behaves as a constant pointer that stores the address of the first element of the array, i.e. the address of the memory where the first element is stored; e.g. x is equal to &x[0]. It is a constant pointer in the sense that it cannot be assigned a different address. Since the name of an array acts as a pointer, an individual element of an array can alternatively be accessed using pointer arithmetic instead of the index notation. In essence, the index notation mat[i] is equivalent to *(mat+i).
You should be particularly careful not to exceed the bounds of an array, since the compiler does not perform such checks. In addition, memory is often wasted when using fixed-size arrays, since we tend to allocate much more memory than we will probably ever need. Dynamic memory allocation can be used to avoid this waste by allocating, during execution, exactly the required memory, e.g.:
double x[5];
x[0]= 5;
x[3] = x[0]+23;
/* Example on references, pointers and arrays */
#include <iostream>
#include <cstdio>
using namespace std;
int main()
{
double x=11.1 , *px;
double &rx = x;
cout << "\n x = " << x << endl;
cout << " rx = " << rx << endl;
rx = 33.3;
cout << "\n x = " << x << endl;
cout << " rx = " << rx << endl;
px = &x;
ios::sync_with_stdio();
printf( "\n px = %p \n" , (void *)px );
ios::sync_with_stdio();
cout << " *px = " << *px << endl;
*px = 44.4;
cout << "\n x = " << x << endl;
cout << " *px = " << *px << endl;
double mat[] = { 10 , 20 , 30};
px = mat ;
cout << "\n mat[0] = " << mat[0] << endl;
cout << "\n px = " << *px++ << endl;
cout << " px = " << *px << endl;
cout << "\n px[1] = *(px+1) = " << *(px+1) << endl;
cout << " mat[2] = *(mat+2) = " << *(mat+2) << endl;
}
Output
x = 11.1
rx = 11.1
x = 33.3
rx = 33.3
px = 7fffae20
*px = 33.3
x = 44.4
*px = 44.4
mat[0] = 10
px = 10
px = 20
px[1] = *(px+1) = 30
mat[2] = *(mat+2) = 30
11. Strings as Arrays of char
In C++ there are two string representations: the traditional way of an array of characters, and, the newer standard C++ string class. For now, we consider the former string representation.
The string is stored as an array of char with a special character at the end ’\0’, which is called the terminator character, since it is used to indicate the end of the string. Therefore, memory space for an extra character must be provided to be able to store the terminator character.
Several library functions that can be used to manipulate strings are provided by the Standard C-library. To use them, the cstring header file, which contains their declarations, must first be included. The most commonly used are the following:
strcpy(char str1[] , char str2[]): strcpy copies the contents of str2 (including the terminator character) into str1
strcmp(char str1[] , char str2[]): strcmp compares the two strings alphabetically, returning zero if they are exactly the same, otherwise a nonzero value
strlen(char str1[]): strlen counts the number of characters in the string (not including the terminator character)
Although an array of char can be initialized using the string notation (i.e. a literal enclosed in double quotes), it is not possible to assign a string to an array of char after its definition. A C standard library function, such as strcpy, needs to be used to perform the copying, e.g.:
char s1[] = "MIT" , s2[4] ;
strcpy (s2,"MIT");
char str3[] = { 'M' , 'I' , 'T' , '\0' };
The following example demonstrates how a string can be defined, initialized or assigned a literal string, how it can be modified, etc.
/* Example for strings as arrays of char */
#include <iostream.h>
#include <cstring>
int main(void)
{
char str1[] = "test" ;
const char *str2 = "Test" ;
char str3[50] ;
cout << "str1 and str2 are " ;
strcmp(str1,str2) ? cout << "different" << endl : cout << "the same" << endl;
cout << "\n str1 = " << str1 << endl ;
cout << " str2 = " << str2 << endl ;
strcpy(str3,str1);
strcat(str3,str2);
cout << "\n str3 = " << str3 << endl ;
str3[0] = 'T';
cout << " str3 = " << str3 << "\t length = "
<< strlen(str3) << endl ;
return 0;
}
Output
str1 and str2 are different
str1 = test
str2 = Test
str3 = testTest
str3 = TestTest length = 8
12. Arrays of Pointers
Since pointers are variables we can have arrays of pointers. Such arrays are often used to store the location in memory of a collection of data with the same type. Each element of an array of pointers is a pointer which can be used to point to a memory location.
e.g.:
double *pd[100]; // pd is an array of 100 pointers to double
char *pc[20]; // pc is an array of 20 pointers to char
13. 2-D and Higher Dimensions Arrays
Multidimensional arrays (of any dimension) can be defined using additional brackets, one for each dimension. A multidimensional array can be initialized similarly to a 1-D array, with the option to use nested curly braces to group the data along the different dimensions (e.g. rows).
double mat2[6][3];
double mat3[5][3][2];
double m[][3] = { {3 , 6.2 , 0.5} , { 23.7 , 0.75 , 4.8 } }; // only the leftmost dimension may be omitted
double m[3][10] = { {4.5} , {13.7} };
Although it is natural to think of a 2-D array as having a rectangular 2-D form, the elements of arrays (of any dimension) in C++ are actually stored in a contiguous block of memory, one row after another (row-major order).
Therefore, the following expressions are exactly equivalent to m[i][j]:
*(m[i]+j)
(*(m+i))[j]
*((*(m+i))+j)
*(&m[0][0]+COL_SIZE*i+j) // where COL_SIZE is the number of columns (the row width)
/* Example for 2-D arrays */
#include <iostream.h>
#include <stdlib.h>
#include <iomanip.h>
#define ROW_SIZE 4
#define COL_SIZE 7
int main()
{
double m[ROW_SIZE][COL_SIZE] = { { 4.5 , 0.45 } ,
{ 13.7 , 67.3 , 17.7 } , { 2.6 } };
int i,j;
for(i=0;i<ROW_SIZE;i++)
{
cout << endl;
for(j=0;j<COL_SIZE;j++)
cout << " " << setw(5) << m[i][j];
}
return EXIT_SUCCESS;
}
Output
4.5 0.45 0 0 0 0 0
13.7 67.3 17.7 0 0 0 0
2.6 0 0 0 0 0 0
0 0 0 0 0 0 0
14. Return by Reference Functions
A function can either return nothing, in which case it is declared as void and a return statement is optional, or return a value. In the latter case the default is to return by value, i.e. a copy of the returned value is passed back to the caller. However, in some cases it is preferable to return a value by reference, or using a pointer. For example, it may be useful to return a reference to an object or variable in order to manipulate it directly, or it may be more efficient to return a large user-defined object by reference, or using a pointer, to avoid the overhead of copying it.
When a function returns by-reference, i.e. returns a reference to a variable or object, the function call can be placed in the LHS of an assignment statement. However, a variable, or object, with local scope cannot be returned by-reference, since the memory allocated for it is released upon exiting the function.
The following example shows one such a case, in which a reference to a specific element of an array is returned.
/* Example on return by reference */
#include <iostream.h>
double & fun(int i, double *x);
int main(void)
{
double x[10]={0};
fun(3,x) = 57.6;
cout << "\n x[5] = " << x[5] << endl;
cout << " x[6] = " << x[6] << endl;
}
double & fun(int i, double *x)
{
return x[2*i];
}
Output
x[5] = 0
x[6] = 57.6
15. Dynamic Memory Allocation
Memory can be obtained dynamically from the system, in particular from a pool of free memory named the free store (or heap), after the program has been compiled, i.e. during execution, using the new operator. This operator can be used to allocate sufficient memory for one or more variables of any data type (standard or user-defined), i.e. for a single variable, or object, or an array of variables, or objects.
Memory is allocated dynamically using the operator new followed by a type specifier and, in the case of an array, by the array size inside brackets. In the case of a single variable an initial value can also be provided within parentheses. The new expression returns the address of the beginning of the allocated memory, which can be stored in a pointer in order to access that memory indirectly. If the dynamic memory allocation is not successful, a standard-conforming new throws an exception (older compilers returned NULL, i.e. a 0 value). The following statements allocate memory for one float and one int, and the address of each memory location is returned and assigned to the pointer pf and pi, respectively. Dynamically allocated memory, if not explicitly initialized, is uninitialized with random contents.
float *pf = new float; // allocate memory for a float
int *pi = new int(37); // allocate memory for an int and assign an initial value to it
Similarly, the following statement allocates contiguous memory for a size number of doubles and then the address of the beginning of that memory is returned and assigned to the pointer pd. The size does not have to be a constant, i.e. known at compilation, but can be specified during execution of the program according to the program demands. However, there is no way to initialize the members of a dynamically allocated array.
double *pd = new double[size];
In contrast, to statically allocate memory for an array of doubles, the size must be known at compilation, i.e. the size must be a constant. In the following definition of an array, the name of the array is a constant pointer, since it cannot point anywhere else, while in the previous example pd can be used to store any memory address where a double is stored.
double mat[50];
Multidimensional arrays can also be dynamically allocated using a new expression. However, only the left-most dimension can be specified at run-time. The other dimensions need to be defined at compilation time, i.e. to have a constant size, e.g.:
double (*pmat)[100] = new double [size][100]; // size does not need to be a constant
The allocation and release of memory for statically allocated variables is done automatically by the compiler. Memory for local variables is automatically released upon exiting the function, unless they are defined as static, and that memory can then be used for other purposes. However, dynamically allocated memory is not released upon exiting a function, and care must be taken to avoid losing the address of that memory, resulting in a memory leak. Dynamic allocation and deallocation of memory is the programmer's responsibility. When memory that is allocated dynamically is not needed any more, it should be released using the delete operator, as shown in the following example, which is based on the previous one:
delete pf; // release memory allocated for a single variable
delete pi;
delete [] pd; // release memory allocated for an array
The brackets are required when the pointer points to consecutive memory of a dynamically allocated array, in order to release all memory that has been allocated earlier. Only memory that has been dynamically allocated (i.e. using the new operator) can be released using the delete operator.
It is good practice to set a dangling pointer, i.e. a pointer that refers to invalid memory, such as memory that has already been released, to NULL (or 0). This avoids the error of reading from or writing to an already released memory location. It is not wrong to apply the delete expression to a pointer that is set to 0, because a check is performed before the delete operator is actually applied; therefore, there is no reason to test whether a pointer is 0 before deleting it. However, the delete operator should never be applied twice to the same memory location, e.g. by mistake when two pointers store the same address, because it may corrupt data that has been stored there after the first release of the memory.
If the available memory in the program's free store is exhausted, then an exception is thrown, and, as we will see later, there are ways to rationally handle such exceptions.
16. The sizeof Operator
The sizeof operator gives the size in bytes of a variable, a built-in data type, or a user-defined data structure or class. It can be used to determine the number of bytes required to store a certain object.
e.g:
int i, mat[10]; double d; // On an Athena SGI or SUN workstation:
sizeof(char); // returns 1 (byte)
sizeof(int); sizeof i; // each returns 4 (bytes)
sizeof d ; // returns 8 (a double occupies 8 bytes)
sizeof mat; // returns 40
17. Data Structures
A data structure is very similar to a class, and it is not often used in C++, since a class provides more features. A structure can be used to store, as a single entity, several different variables, not necessarily of the same data type. Structures in C++ have some extra features over structures in C, such as access restriction capabilities, member functions, and operator overloading.
A data structure is defined using the keyword struct, followed by the name of the structure and then its body. To define an instance of the data structure we can use its name directly (the struct keyword is not required, as it is in C). To access a member of a data structure, the dot or the arrow operator is used, depending on whether we have the actual data structure instance or a pointer to it.
Structures can be passed to a function like any other variable, i.e. by value, by reference, or using pointers. Because data structures are often large in size, pass by value is typically not preferred, in order to avoid the copy overhead.
/* Example on data structures */
#include <iostream.h>
struct point
{
double x;
double y;
};
typedef struct point Point; // the typedef is not necessary in C++
int main()
{
Point p; // p is a data structure point
point *pp; // pp is a pointer to a point data structure
p.x = 3.2; // using the dot operator
pp = &p;
pp -> y = 7.5; // using the arrow operator
cout << "\n x = " << pp->x << "\t y = " << (&p) -> y << endl;
struct point p2 = {4.7 , 9.2}; // A data structure instance can be initialized using
pp = &p2; // comma separated values enclosed in curly braces
cout << " x = " << pp->x << " y = " << p2.y << endl;
}
Output
x = 3.2 y = 7.5
x = 4.7 y = 9.2
Note: The typedef allows us to assign a name to a specific data type, built-in or user-defined, and then use it as a type specifier. In the above example, struct point p and Point p are exactly equivalent, since the following typedef has been defined: typedef struct point Point; The typedef keyword is followed by a data type and an identifier that we want to use as an alias for that data type.
18. Introduction to Classes and Objects
A class is a user defined specification that encapsulates in a single entity both data and functions that can operate on them. An object is an instance of a class and the class/object relation is similar to the built-in data type/variable relation.
A class is defined using the keyword class followed by the name of the class. A class typically has: data members, which contain the data stored by the class; member functions, which operate on these data; constructors, which are member functions with the same name as the class and are executed upon the creation of an instance of the class in order to perform the proper initialization; a destructor, which is invoked when an instance of the class goes out of scope; and many other features, such as operator overloading, declarations of friend functions, etc.
The following simple example demonstrates the use of a Point class with some of the most basic features of a class.
/* Example on classes and objects */
#include <iostream.h>
#include <stdlib.h>
class Point
{
private:
double x,y;
public:
Point();
Point(double x, double y);
void print();
};
Point::Point()
{
cout << " In Point() default constructor " << endl ;
x = 0.0 ;
y = 0.0 ;
}
Point::Point(double xx, double yy)
{
cout << " In Point(double,double) constructor " << endl ;
x = xx ;
y = yy ;
}
void Point::print()
{
cout << " (x,y) = (" << x << "," << y << ")" ;
}
int main ( )
{
Point p1;
Point p2(17,45.75);
cout << "\n Point P1: " ;
p1.print();
cout << "\n Point P2: " ;
p2.print();
return EXIT_SUCCESS ;
}
Output
In Point() default constructor
In Point(double,double) constructor
Point P1: (x,y) = (0,0)
Point P2: (x,y) = (17,45.75)
These notes were prepared by Petros Komodromos.
Topics
- Classes and Objects
- Classes: member variables & member functions
- Classes: constructors & destructor
- Constructor header initialization
- Copy constructors
- Member variables & functions protection: private, protected & public
- Static class data and class functions
- Class scope
- Pointers to class members
- Operator overloading
- Friend functions
- Type conversions
1. Classes and Objects
A class is a user-defined data type with which we can define not only data members (or member variables), but also member functions to manipulate these data. It is essentially an aggregate of data elements and a set of operations to manipulate them.
The definition of a class consists of the class head (the keyword class and the class tag name, i.e. the class name) and the class body enclosed by braces { }; and terminated by a semicolon. The class body contains the member variables, and the definitions or/and declarations of the member functions. The access levels of the class members can be specified in the class body by placing declarations in certain parts of the class body.
The definition of a class should be provided in every source code file that uses the class. A class definition is allowed to appear many times in a program as long as it is identical in each case. Since the definition should be exactly the same, it is preferable to place the class definition in a header file that is included wherever necessary, in order to avoid inconsistencies due to different definitions of the same class.
A class declaration is the class header followed by a semicolon. It can be used to inform the compiler that a certain class is defined somewhere in the program.
class MyComplex; // Class declaration

class MyComplex // class head (class definition follows)
{ // class body
public:
double real; // member variables
double imaginary;
MyComplex() // default constructor definition
{
real = 0.0 ;
imaginary = 0.0;
}
MyComplex(double r, double i) // inline, since its definition is provided
{
real=r;
imaginary=i;
}
~MyComplex() // Destructor definition
{
…….
}
double get_real(void); // Member function prototype
void print(void); // Member function prototype
void set_real(double) // Member function definition
{
……………
}
};
double MyComplex::get_real(void) // Externally defined member function
{
……….
}
void MyComplex::print(void)
{
……….
}
Definition of an Object
An object is defined using the class’ name, in the same way a built-in data type is used to define a variable of that data type. The keyword class can optionally be used before the name of the class. Memory, sufficient to store the data members of an object, is allocated as soon as an object is defined.
e.g.:
MyComplex x;
double d;
MyComplex y(3,2.5);
class MyComplex t,r;
Access of an Object
A publicly declared data member, or a member function, of an object can be accessed using the dot operator (.)
x.real = 12;
y.imaginary = 2.5;
To access a member data or function of an object using a pointer to that object, the arrow operator (->) can be used instead. Alternatively, the pointer can be dereferenced and then the dot operator can be applied on the dereferenced pointer.
MyComplex *px ;
px = &x ;
px ->real =24.5;
(*px).real =24.5;
2. Classes: Data Members and Member Functions
The class body typically contains the class data members and the member functions. It may also contain constructors, a destructor, friend function declarations, operator overloadings, etc. Data members are the variables in which the state of each instance (i.e. object) of a class is stored, while member functions are used to specify the behavior of any instance of the class.
Data members (or member variables) of a class are usually defined in the private part of the class definition in order to restrict access to them. Data members can be of any built-in, or user-defined, data type. A class cannot have data members of its own type, although it may have pointers or references to objects of that type.
A member function is a class-specific function which is declared, or defined, in the body of the class definition, and is always associated with a certain object, i.e. a specific instance of the class: the one that has been used to call the function. A member function can be invoked using one of the class-member selectors: the dot operator (.) for objects (i.e. instances of that class), or the arrow operator (->) for pointers to objects.
classObject.memberFunctionName(arguments) ;
pointerToClassObject -> memberFunctionName(arguments) ;
e.g., from the previous example:
MyComplex x, *px ;
double y ;
y = x.get_real() ;
px = &x;
px -> get_real() ;
(*px).get_imaginary();
Whenever a (non-static) member function is invoked, a special parameter named this is implicitly passed to it (behind the scenes): a pointer of the class type that points to the object used to invoke the function. This special parameter can be used like any other pointer to explicitly access a member variable, or function, of the object, e.g.:
this -> member_variable or this -> member_function(….)
Thus, the actual object with which the member function is invoked can be obtained by dereferencing this special pointer, i.e. by using *this. This special pointer can be used to resolve name conflicts, e.g. when an argument has the same name as a data member of the class. The pointer this can also be used to return the object with which the member function has been invoked. Finally, this can be used to check whether the object used to invoke a member function is the same as an argument passed to it, by comparing this with the address of the object passed as a parameter, e.g. when copying one object to another.
Data members and member functions of a class can be accessed from inside a member function that has been invoked with a certain object of that class without the need to use the dot or the arrow operator. The object pointed by this is the one with which the member function is invoked.
A member function can be overloaded, as any other regular function. The compiler uses the signature of the alternative member functions and the data type of the passed arguments to select the proper function to invoke. The constructors of a class are typically overloaded considering all possible arguments that can be used during the definition of an object of the class.
A member function can also be defined outside the class (an externally defined member function), as long as its declaration is provided in the class body. The function definition consists of the header and the body. The header is similar to the function declaration (prototype), with the only difference being the specification of the class to which the member function belongs. This is indicated by providing the class name followed by the scope resolution operator after the return data type, i.e. before the name of the member function.
returnDataType className :: memberFunctionName(arguments)
{
………
}
A member function that is defined within the body of the class definition (i.e. not externally), is automatically considered to be an inline function. Functions defined outside the class body, that are small and frequently called, can be declared as inline to save the overhead due to the function call. The declaration of a function to be inline can be done either in the function declaration inside the class body, in the header of the class definition, or both. However, the definitions of externally defined functions declared as inline should be placed into a header file that can be included in every source code file that invokes that function.
Typically the member data of a class are defined in its private part which provides restrictions on the accessibility of them, by allowing only to the members of this class to access it, supporting information hiding. Most member functions are defined in the public part of the class, providing a public interface to the private part.
A class may also be defined within the body of another class; such a class is called a nested class. In general, a nested class and its enclosing class follow the usual access privileges, i.e. they do not have access to the private members of each other.
Finally, a class can be defined within a function body, i.e. having local scope. However, the members of a local class cannot be defined outside the class definition, and static members are not allowed, since they would require a definition outside the body of the class definition, which is impossible for a local class.
3. Classes: Constructors & Destructor
Constructors and destructors are special member functions that are automatically, i.e. implicitly, invoked when an object is created (i.e. defined) or destroyed (i.e. goes out of scope). It is allowed to provide, within curly braces, the values to which the data members of an object should be initialized, as long as the members are public, e.g.: Point p = { 3 , 1.5 };. However, with the exception of specific applications that need to initialize a huge number of data members with constant values, it is preferable to use constructors to explicitly define the desired initializations while retaining the data hiding and encapsulation of C++.
A constructor is automatically called whenever a new class object is created, allowing the explicit initialization of the data members upon the creation of the object. The name of a constructor is the same as the name of the class. A constructor should not have a return type specified, not even void, since it does not return anything. A constructor is used for initialization, assignment of certain values that may be passed to it as arguments, type conversion, and dynamic memory management. A class may have many constructors as long as they have different signatures, i.e. different parameters, so that the compiler can distinguish which one to call. It is preferable to provide a default constructor, rather than let the compiler implicitly define and use one. This is necessary when having pointers as data members, since the constructor that is implicitly employed by the compiler probably cannot do the correct dynamic memory allocation and copying.
The default constructor is a constructor that does not require any arguments when it is invoked. It is called automatically whenever a new object is created without arguments at the definition. A constructor with parameters can also serve as the default constructor by assigning default values to all of its parameters, which allows its invocation without any arguments. However, if constructors are defined but no default constructor is among them, it is not allowed to define an object without specifying the arguments required to invoke one of the existing (non-default) constructors. Therefore, it is preferable to always define a default constructor whenever any other constructor is defined.
/* Example on constructors */
#include <iostream.h>
class MyComplex
{
public:
double real;
double imaginary;
MyComplex() // default constructor
{
real = 0.0 ;
imaginary = 0.0;
}
MyComplex(double real, double imaginary)
{
this->real = real;
MyComplex::imaginary = imaginary;
}
MyComplex(const MyComplex &c) // copy constructor
{
real = c.real ;
imaginary = c.imaginary ;
}
};
int main()
{
MyComplex x;
cout << "\n x = " << x.real << " + "
<< x.imaginary << " i " << endl ;
x.real = 15.5;
x.imaginary = 2.5;
cout << " x = " << x.real << " + "
<< x.imaginary << " i " << endl ;
double r=3.3;
double i=7.5;
MyComplex y(r,i);
cout << " y = " << y.real << " + "
<< y.imaginary << " i " << endl ;
MyComplex z(x);
cout << " z = " << z.real << " + "
<< z.imaginary << " i " << endl ;
}
Output
 x = 0 + 0 i
x = 15.5 + 2.5 i
y = 3.3 + 7.5 i
z = 15.5 + 2.5 i
A constructor can be invoked using any of the following two forms to define an object:
MyComplex c(7.3, 0.65);
MyComplex c = MyComplex(7.3, 0.65);
Any constructor, as well as the destructor, can also be defined as inline to avoid the overhead of the function call, when it is defined outside the class. A constructor cannot be defined using the const keyword to consider the object pointed by the pointer this as constant, because the const property of an object is set after the constructor returns and the object is completely initialized.
Typically, the constructors are defined in the public section of a class definition. However, in some cases a constructor may be declared as a private member to prevent the definition of an object of that class using specific data type parameters as arguments (or no arguments for the default), or to forbid, in general, the use of objects of that class.
A constructor with a single parameter can also serve as a conversion function, and the compiler can implicitly invoke such a constructor to convert a variable of the parameter's data type to the constructor's class. To prevent the implicit use of a constructor as a conversion function, the constructor can be declared with the explicit keyword.
An array of objects can be defined and initialized using the following form. With this statement an array of 3 objects is defined. Each of them is initialized using the provided values and the corresponding constructor, i.e. the MyComplex(double real, double imaginary) constructor.
MyComplex mat[]= { MyComplex(3,5), MyComplex(7,1), MyComplex(2,4) };
Another special member function in C++ is the destructor, which has the name of the class preceded by a tilde (~). The destructor is used for cleanup that may be required whenever an object goes out of scope, before the memory allocated for it is released, or when the delete operator is used to free memory dynamically allocated for an object. The destructor is used to free resources acquired by the constructor, such as releasing dynamically allocated memory, closing files, etc. However, many classes do not need a destructor, because no resources need to be deallocated and no special actions need to be performed when an object is "destroyed".
The destructor is automatically called whenever an object is destroyed, either because it goes out of scope, or because its dynamically allocated memory is released using the delete operator, which invokes the corresponding destructor. Release of dynamically allocated memory, typically allocated earlier by a constructor, is done in the destructor using the operator delete (or delete[]) in order to avoid memory leaks. To release memory dynamically allocated for an array of objects (built-in or user-defined), the brackets are required to ensure that the entire memory is released and all necessary calls to the class destructor are made. The destructor of an object can also be explicitly invoked, without necessarily releasing dynamically allocated memory, by calling the destructor through a pointer to the object, using the arrow operator and the destructor name, i.e. the class name following a tilde.
It is illegal to specify a return type, including void, for the destructor of a class, as well as to specify any parameters. Therefore, there can be only one destructor per class.
When a function is called passing an object by value, a temporary copy of the object is created using the copy constructor, temporarily allocating memory to store the object parameter. Similarly, when an object of a class is returned by value from a function, the copy constructor is implicitly invoked to allocate the necessary memory and initialize the object's data members according to the object returned by the function.
4. Constructor Header Initialization
An alternative way to initialize the data members of an object is using header initialization, which is a comma separated list of data members with the desired initial values in the constructor’s definition. It is also known as member initialization list. The header initialization is achieved by using a colon after the header of a constructor followed by a comma separated list of the data members to be initialized and the value to which each of them is to be initialized inside parentheses. Typically, the parameters that are passed as arguments to the constructor are used to provide values for the data members. Using a member initialization list is considered to be an initialization, while initializing the members inside the constructor’s body is considered to be an assignment.
Header initialization is preferred, in the case of user-defined data members, for performance reasons relative to assignment, since the latter involves extra calls to constructors. The use of header initialization is necessary when a constant data member must be initialized, since a const data type is not allowed to appear on the LHS of an assignment, i.e. it is illegal to initialize a const in a constructor's body. In addition, a reference data member can also be initialized only using a member initialization list, since it cannot appear on the LHS of an assignment.
The following example demonstrates the use of a constructor header initialization.
class MyComplex
{
private:
double real;
double imaginary;
public:
MyComplex(double re=0, double im=0) : real(re), imaginary(im) { }
void print(void);
};

void MyComplex::print(void)
{
cout << real << " + " << imaginary << " i " ;
}

int main()
{
MyComplex x, y(7, 2.1);
cout << "\n x = " ; x.print() ;
cout << "\n y = " ; y.print() ;
}

Output:
 x = 0 + 0 i
 y = 7 + 2.1 i
5. Copy Constructors
A copy constructor is a constructor with one parameter: a reference to an object of the same class to which the constructor belongs. It is used whenever an object is explicitly initialized with another object of the same class as argument. It is also used whenever an object is passed by value as an argument to a function, or when an object of the class is returned from a function by value. Note that when one existing object is assigned to another with the assignment operator, the (possibly compiler-provided) assignment operator is used, not the copy constructor; however, when the assignment syntax is used to initialize an object at its definition, the copy constructor is invoked.
If a copy constructor is not provided, a default memberwise initialization takes place, which in some cases, e.g. when having pointers as data members, may not be the proper action to take.
MyComplex(const MyComplex &c)   // copy constructor
{
real = c.real ;
imaginary = c.imaginary ;
}
6. Member Variables and Functions Protection: Private, Protected, and Public
You can specify different access privileges for specific member data and functions by selectively defining them in the private, protected, or public parts of the class definition. These sections, i.e. the private, protected, and public parts, are specified using the corresponding access specifier keywords private, protected and public.
The member variables and functions declared, or defined, in the public part of a class are accessible from anywhere within the program without any limitation. Usually the member data of a class are defined in the private (or protected) part, and member functions to access them are defined in the public part of the class.
The member variables and functions declared, or defined, in the private part of the class definition can be accessed only by member functions defined in the same class and by friend functions of the class. Member functions declared, or defined, in the private part of the class definition, i.e. private member functions, can be invoked only by member functions of the class, or by friend functions of the class, just like the private data members.
Finally, the member variables and functions in the protected part are accessible only by member functions defined in the same class, or in subclasses of that class, and by any friend functions of the class.
Member functions of a class have access to variables and functions defined in any part, private, protected or public, of the class. Typically all data of a class, i.e. its member variables, are defined in its private or protected parts to restrict access to them, which provides information hiding. The member functions, which represent the behavior of the class that should be accessible to the user of the class, are typically defined in its public section, providing the public interface of the class.
There can be any number of sections labeled with the access specifiers, i.e. public, protected and private. The access level that is specified remains in effect until a new access specifier is encountered. The default access level, in case no access specifier is specified, is private.
An object that is passed by reference to a member function of a class, using another object to invoke the function, can be protected against modification by declaring the corresponding parameter as const, e.g.:
MyComplex(const MyComplex &c);
The object that is used to invoke the member function, i.e. the object pointed to by this, can be protected by declaring the function as const. This is specified after the parameter list and before the body of the member function in the function's definition, e.g.:
double get_real(void) const { ………… }
If the function is externally defined, it must also be specified as const after the parameter list and before the semicolon in the function declaration, e.g.:
double get_real(void) const;
An object declared as const is considered constant from the moment its initialization, i.e. by a constructor, is finished until its deletion, i.e. by the destructor, starts. Therefore, constructors and destructors, which are never defined as const member functions, can still be invoked for a constant object. In contrast, a non-const member function cannot be invoked by a const object.
Modifying the earlier MyComplex example by putting the member variables in the private part, we no longer have access to them from outside of the class. Therefore, we must provide functions that can read their values and functions that can modify their values. With these member functions, which are defined in the public part of the class definition, we have indirect access to the private member data.
/* Example on member variables & functions protection */
class MyComplex
{
private:            // private part
double real;
double imaginary;
public:             // public part
MyComplex()         // default constructor
{
real = 0.0 ;
imaginary = 0.0;
}
MyComplex(double r, double i) : real(r), imaginary(i)   // header initialization
{
}
MyComplex(const MyComplex &c)   // copy constructor
{
real = c.real ;
imaginary = c.imaginary ;
}
~MyComplex()
{
// cout << "\nAn object has been destroyed" << endl;
}
double get_real(void) const ;
double get_imaginary(void) const;
void set_real(double);
void set_imaginary(double);
};

// Member functions defined outside the body of the class definition
double MyComplex::get_real(void) const
{
return real;
}
double MyComplex::get_imaginary(void) const
{
return imaginary;
}
void MyComplex::set_real(double real)
{
this -> real = real ;
}
void MyComplex::set_imaginary(double im)
{
imaginary = im;
}

int main()
{
MyComplex x;
cout << "\n x = " << x.get_real()
<< " + " << x.get_imaginary()
<< " i " << endl ;
double r=3.3;
double i=7.5;
MyComplex y(r,i);
cout << " x = " << y.get_real()
<< " + " << y.get_imaginary()
<< " i " << endl ;
return EXIT_SUCCESS;
}

Output:
 x = 0 + 0 i
 x = 3.3 + 7.5 i
7. Static Class Data and Class Functions
If a data member of a class is defined using the keyword static before its data type, then memory is allocated for only one such element for the entire class, irrespective of the number of instances (i.e. objects) of that class. The lifetime (i.e. the extent) of this static data is the entire program, and there is only one such variable, shared by all objects of the class. A static class data member is typically used to store information common to all objects of a class and to avoid unnecessary duplication of information.
Memory space is allocated for each static class variable only once, even if there are no objects of that class. Not only member data, i.e. variables, but also member functions can be defined as static. The latter are typically used to manipulate the former. A function is declared as static in the class body, i.e. at its declaration, and not at its definition.
A static class member, data or function, can be accessed using an object and the dot operator, or a pointer to an object and the arrow operator. In addition, it can be accessed using the class name followed by the class scope resolution operator (::).
Because the pointer this is not associated with function calls to a static member function, it is a compile time error to attempt to access directly non-static members of the class from a static function.
The access levels and constraints of a static class member, data or function, are the same as those of non-static members. The only exception is when a static variable is initialized. Then, the access level is relaxed to allow the initialization, as shown in the following example.
A static class member is defined and initialized outside the class definition, as any other non-member global variable, i.e. outside of any function. The definition of a static member should appear only once in a program and, therefore, it should not be placed in a header file.
One could alternatively use a regular global variable to store information that refers to the entire class and not to individual objects. However, the use of static class members should be preferred, since it provides all the advantages of object-oriented programming, namely information hiding, data encapsulation, and a direct association of the information with the class.
Because there is only one instance (one copy) of a static member data of a class, a static member data can be of the same type as the class itself.
The following example shows how a static class variable and function are defined and used.
/* Example on static class data and functions */
class Employee
{
private:
char *first_name ;
char *last_name ;
double salary ;
int social_security ;
static int employeesNumber;   // static class data declaration
public:
Employee(char *first="None", char *last="None", double sal=0.0, int soc=0)
{
first_name = new char[strlen(first)+1] ;
last_name = new char[strlen(last)+1] ;
strcpy(first_name,first) ;
strcpy(last_name,last) ;
salary = sal ;
social_security = soc ;
employeesNumber++;
}
……
static void printEmployeesNumber(void);   // static class function declaration
};

void Employee::printEmployeesNumber(void)   // static class function definition
{
cout << "\n Number of employees: " << employeesNumber;
}

int Employee::employeesNumber=0;   // static class data definition and initialization

int main ( )
{
Employee::printEmployeesNumber();
Employee a;
a.printEmployeesNumber();
char first_name[20]="Bugs";
char last_name[30]="Bunny";
double salary=100000 ;
int social_security=103038 ;
Employee b(first_name,last_name,salary,social_security);
b.printEmployeesNumber();
}
8. Class Scope
The member data and functions of a class are considered to belong to the corresponding class scope. Inside class scope, in general, there is no need to specify the class to which a member belongs in order to access it. The body of a class definition, the code that follows the name of an externally defined member function up to the end of the body of its definition, and the code following the name of a static member at its definition up to the semicolon are all considered to be in class scope. Outside class scope, however, the access operators, i.e. the dot and arrow operators, and the class scope resolution operator must be used to specify the class scope to which the member belongs.
When an identifier, i.e. a variable or function name, is used in a class definition, first the declarations of the already declared members are considered, and if no member matches the name the declarations in the namespace scope (e.g. the global scope) located before the class definition are considered. When an identifier is used in a member function of a class the resolution of the name starts with the local scope declarations, e.g. local variables and function parameters, then if nothing is found, it continues with declarations of all members of the class. Finally, if the name is still not resolved the declarations that appear in the namespace scope are also considered.
/* Example on class scope */
class MyClass
{
public:
int number;
};

int number = 33;

int main()
{
MyClass n;
n.number=22;
int number = 11;
cout << "\n number = " << number ;
cout << "\n n.number = " << n.number ;
cout << "\n ::number = " << ::number << endl;
}

Output:
 number = 11
 n.number = 22
 ::number = 33
9. Pointers to Class Members
Member data can also be accessed using pointers to specific member data. To define a pointer to a member data of a class, the name of the class followed by the class scope resolution operator must be used between the data type of the variable to which the pointer may point and the dereference operator (*). Then, the pointer can be assigned the address of a specific member data of the class using the address-of operator (&) followed by the class name, the class scope resolution operator, and the specific member data name. Having defined a pointer to a specific data member of a class, the pointer can be dereferenced and used with any instance, i.e. object, of the class, as shown in the following example. Therefore, a specific object must be provided when a pointer to a data member is used.
Similarly a pointer to a member function can be defined. Again it is necessary to provide the class type whose member is the function, in addition to the return type and the number and type of the parameters of the function.
Note that pointers to static member data and functions should be defined as regular pointers to variables and functions, i.e. without specifying the class, since no association with a specific object needs to be resolved when accessing a static member, and no this pointer is associated with static member function calls.
/* Example on the use of pointers to class members */
class MyComplex {
public:
double real, imaginary;
void print() { cout << real << " + " << imaginary << "i "; }
};

int main()
{
MyComplex x, y, *py=&y ;
double MyComplex::*pd;        // pointer to a double data member
void (MyComplex::*pf)()=0;    // pointer to a member function
pd = &MyComplex::real;
x.*pd = 1.1;
y.*pd = -22.4;
pd = &MyComplex::imaginary;
x.*pd = 0.3;
y.*pd = 44.5;
cout << "\n x = " << x.real << " + " << x.*pd << " i " ;
cout << "\n y = " << y.real << " + " << y.*pd << " i " << endl;
pf = &MyComplex::print;
cout << "\n\n x = " ;
(x.*pf)();
cout << "\n y = " ;
((*py).*pf)();
cout << "\n y = " ;
(py->*pf)();
}

Output:
 x = 1.1 + 0.3 i
 y = -22.4 + 44.5 i

 x = 1.1 + 0.3i
 y = -22.4 + 44.5i
 y = -22.4 + 44.5i
10. Operator Overloading
C++ allows us to provide new definitions for operators used with user-defined data types, i.e. objects. This feature is called operator overloading and allows us to give normal operators additional meaning when they are applied to user-defined data types.
All operators can be overloaded except the following: ".", ".*", "::", "?:", and sizeof. The subscript [ ], function call ( ), and arrow access -> operators can be overloaded only as member functions. An operator overloading function needs to be either a member function of a class, or have a class object as a parameter, except when the overloaded operator is new, delete, or delete[].
To overload an operator we define a member function with the keyword operator followed by the operator that is overloaded, instead of a name for the member function. This declaration syntax informs the compiler that this member function should be called whenever the particular operator is encountered next to an object of this class as an operand. A member function that overloads an operator can itself be overloaded as a function, so that the same operator overloading function exists in several forms, as long as each of them has a unique signature, i.e. differs in its parameters from all others. The compiler distinguishes among overloaded operators by looking at the operator and the data types of its operands. The precedence and associativity of operators are retained when they are overloaded. It is not possible to define additional operators for the built-in data types. Also, it is not possible to change the arity of an operator, e.g. use a unary operator as a binary one and vice versa, unless it is one of the four operators that have both a unary and a binary form: (+), (-), (*), and (&).
Member functions that overload operators require one argument fewer than the number of operands of the operator, since one operand is the object whose member function is invoked, i.e. *this.
The following example demonstrates the definition and use of an overloaded operator. A unary operator (++) and a binary operator (+) are defined and whenever either of these is encountered together with an object in an expression, the corresponding member function is invoked.
/* Example on operator overloading */
class MyComplex
{
…….
void operator ++(void);                     // member function declarations
MyComplex operator +(const MyComplex &c);
};

// member function definitions
void MyComplex::operator ++(void)
{
++real;
}

MyComplex MyComplex::operator +(const MyComplex & c)
{
MyComplex sum;
sum.real = real + c.real;
sum.imaginary = imaginary + c.imaginary;
return sum;
}

int main()
{
MyComplex x,y(5,2.4);
MyComplex z = ++x + y;
}
Although the input and output operators are usually overloaded as friend functions, an alternative is to define them as non-friend operator overloading functions and provide proper get and set member functions that can be called from inside the overloaded operator functions.
ostream& operator << (ostream &o, const MyComplex &c)
{
o << c.get_real() << " + " << c.get_imaginary() << " i " ;
return o;
}

int main()
{
MyComplex x;
….
cout << " x = " << x << endl;
}
An alternative way to access an operator overloading member function is to use its actual name, which consists of the keyword operator followed by the specific operator that is overloaded. For example, the member functions that overload the operators ++ and + in the previous example can also be invoked as follows.
int main()
{
MyComplex x,y(5 , 2.4);
MyComplex z ;
x.operator++( );
z = x.operator+(y);
cout << "\n x = " << x << " y = " << y << " z = " << z;
}
If no assignment operator is overloaded, a member-by-member copy is performed by default using a compiler-provided assignment operator that is implicitly invoked. However, there are some cases in which such a "shallow" copy is not what is intended, e.g. when there are pointer data members pointing to dynamically allocated memory. In those cases, an overloaded assignment operator can be used to make a "deep" copy: instead of copying pointer values, which would result in the pointer data members of the two objects pointing to the same memory location, memory is dynamically allocated and the contents of the memory pointed to by the source object's pointer are copied to the memory location pointed to by the corresponding pointer of the other (target) object.
When an initialization of an object is done at its definition using an object of the same class, even if there is an assignment operator overloading available, the copy constructor is used, instead, to initialize the object.
Postfix and prefix versions of the increment (++) and decrement (--) operators can be overloaded. The methods that overload the postfix operators have an additional integer parameter that is used to distinguish them from the prefix versions, i.e. the postfix form is defined as a binary operator with an auxiliary extra operand of type int. The prefix version can be invoked using ++x, or x.operator++(), in the operator or method form respectively. The postfix version, operator++(int), can be invoked using x++, or x.operator++(0) (any number can be used as the argument).
The memory management operators new, new[], delete, and delete[] can also be overloaded to achieve specific memory management requirements. The overloaded new operator should return a pointer to void and have its first parameter of type size_t (size_t is a typedef defined in the header file cstddef).
e.g:. void * operator new (size_t s) { ……… }
The delete operator should return void and have a parameter of type void* which points to the memory that is to be released.
e.g.: void operator delete (void * p) { ……… }
In both cases other parameters of any type are optional. If the new and delete operators are overloaded, they are automatically invoked every time the operators are used, instead of the provided standard ones. The global new and delete operators can still be selectively called by using the global scope resolution operator,
e.g. MyComplex *pc = ::new MyComplex; and ::delete pc;
Similarly the array versions new[] and delete [] can be overloaded and used.
11. Friend Functions
A friend function is not a member function of a class, but a function that is granted special access privileges to all member data and functions of a class. This is achieved by declaring the function in the class body using the keyword friend, which gives that function unlimited access, even to the private part of the class. A friend declaration may appear in any section of the class definition; it makes no difference whether it appears in the private, protected, or public part.
A friend function can be a member function of another class, or even all the member functions of another class can be declared friends. One case where friend functions are useful is when a function needs to have access to two or more unrelated classes. In addition, friend functions allow more flexible operator overloading, since the object of the class is passed as an argument and the function is not an object's member function.
For example, if we overload the + operator as a member function of the MyComplex class, then we can add two objects of this class, c1+c2, and an object of this class and a number, e.g. c1+4.5, assuming for the latter case that a convert constructor is available to convert the number to a MyComplex object. An overloaded member operator of a class is considered and may be invoked only if an instance of the class (i.e. an object) appears to the left of the operator. Therefore, the addition 4.5+c1 is not valid when operator+ is a member function of the class of c1. Using a friend function to overload operator+, both c1+4.5 and 4.5+c1 are valid, because in both cases the conversion constructor of MyComplex is invoked, if necessary, to convert the number to a MyComplex object. However, in the latter case we still need to provide a way to make the conversion from a double to an object of our class.
Input and output overloaded operators are typically defined to be friend functions in order to have access to the data members of the class.
The following example shows a use of two friend functions, of which the one is overloading the input operator:
/* Example on friend functions */
class MyComplex
{
private:
double real, imaginary;
……..
friend void printMyComplex(const MyComplex &c);
friend istream& operator >> (istream &i, MyComplex &c);
};

void printMyComplex(const MyComplex &c)   // a friend function has unlimited access
{
cout << c.real << " + " << c.imaginary << " i " ;
}

istream& operator >> (istream &i, MyComplex &c)
{
cout << "\n Please give the real part: " ;
i >> c.real ;        // access to private members
cout << "\n and the imaginary part: " ;
i >> c.imaginary ;   // access to private members
return i;
}

int main()
{
MyComplex x;
cin >> x;
printMyComplex(x) ;
}
A function may be declared as a friend of more than one class. Also, a member function of a class may be declared as a friend of another class. In addition, a whole class, i.e. all its member functions, may be declared as a friend of another class, which grants all member functions of the friend class access to all member data and functions, even those defined in the private part, of the other class.
class Point
{
……..
friend void Design::draw();   // the member function draw() of
                              // the Design class is declared friend
friend class Spline;          // the Spline class, i.e. all its member
};                            // functions, is declared friend
There are cases in which overloading an operator needs to be done using a friend rather than a member function. For example, suppose an arithmetic operator, such as the addition operator (+), is overloaded using a set of overloaded member functions with the same name to allow its use with any possible data type, i.e. an int, a double, a MyComplex, etc. Then, although combining a MyComplex number with a value of a different data type is allowed when the latter is on the right of the operator, the case of having the MyComplex object on the right is not. For that case a friend function can be used, as shown in the next example.
class MyComplex
{
………
friend MyComplex operator+(double d, const MyComplex &c);
};

MyComplex operator+(double d, const MyComplex &c)
{
MyComplex sum;
sum.real = d + c.real;
sum.imaginary = c.imaginary;
return sum;
}

int main()
{
MyComplex x(3,1.5), y;
y = 17.5 + x;
}
12. Type Conversions
Implicit type conversions are performed when different built-in data types occur in mixed expressions. The rules that govern these conversions are specified by the language, as we have seen in an earlier recitation. C++ allows the definition of conversion rules for user-defined data types that can be used when conversions from one data type to another are required.
Member functions can be defined and used to achieve certain conversions when objects are used as operands to operators (either built-in or overloaded), or as arguments to functions. These functions are implicitly invoked by the compiler whenever necessary to handle conversions.
Even when no explicit conversions are provided, the compiler tries to use constructors that are related to the conversion that has to be performed, e.g. when a value of a different data type is assigned to an object of the constructor's class. By default, a constructor with one parameter may be used by the compiler for a type conversion, i.e. as a conversion function. The following example shows how a constructor is employed to make a type conversion from a built-in data type to a user-defined one, i.e. a class type.
/* Example on the use of a convert constructor */
class LengthFT
{
public:
int feet;
double inches;
LengthFT()            // default constructor
{
feet = 0;
inches = 0.0;
}
LengthFT(double d)    // convert constructor
{
cout << "\n Using the convert constructor" ;
feet = (d*100/2.54)/12;
inches = d*100/2.54 - 12*feet ;
}
};

int main()
{
LengthFT x;
double distance = 0.65;
x = (LengthFT)1.45;   // Type casting (conversion) using the convert constructor
cout << "\n x = " << x.feet << " - " << setprecision(3) << x.inches << "\"" << endl;
x = distance;         // Implicit type conversion using the convert constructor
cout << "\n Distance (m) = " << distance << endl ;
cout << " x = " << x.feet << " - " << x.inches << "\"\n" << flush;
return EXIT_SUCCESS;
}

Output:
 Using the convert constructor
 x = 4 - 9.09"
 Using the convert constructor
 Distance (m) = 0.65
 x = 2 - 1.59"
In addition, operator overloading functions may also be used to handle different data types. An explicit conversion rule can be defined using a conversion function. A conversion function defines how a conversion between a user-defined data type and another data type, user-defined or built-in, should be performed. A type conversion function is defined using the keyword operator followed by the data type name. Although a (converted) value is returned, the function declaration and definition should not specify a return data type. Also, a parameter list should not be defined. A conversion function can be invoked by an explicit cast, or when a mixed expression is encountered and conversions need to be performed.
The following example shows a very simple case where a conversion function is defined in the class LengthFT to convert a LengthFT object to a double. The class LengthFT is used to represent a length in feet-inches form, and whenever it appears in a mixed expression we want to convert it to a double expressing the length in meters.
/* Example on conversion functions using explicit conversion */
class LengthFT
{
public:
int feet;
double inches;
LengthFT(int f=0, double i=0)
{
feet = f;
inches = i;
}
operator double();
};

LengthFT::operator double()
{
return 0.0254*(feet*12+inches);
}

int main()
{
LengthFT x(6,3);
double distance=4.2;
cout << "\n x = " << x.feet << " - " << x.inches << "\"" << endl;
distance += x;   // the member function LengthFT::operator double() is called
cout << " Distance [m] = " << distance << endl ;
cout << " x [m] = " << x << endl ;   // LengthFT::operator double() is called
return EXIT_SUCCESS;
}

Output:
 x = 6 - 3"
 Distance [m] = 6.105
 x [m] = 1.905
These notes were prepared by Petros Komodromos.
Topics
- Inheritance: public, protected and private derivation
- Multiple inheritance
- Inheritance: constructors and destructors
- Inheritance: redefining member functions
- Virtual functions and polymorphism
- Abstract classes
- File streams
Appendix: Extra Material
1. Inheritance: Public, Protected and Private Derivation
Inheritance is the ability to create a new class, called derived class, from an existing one, called base class. The derived class is also called subclass of the base class, which in turn is called superclass of the derived class. A derived class inherits all the data members and member functions of its superclasses, and, in general, it implements an is-a relationship. In contrast, a has-a relationship is implemented using a class object as a member of another class. New members can be defined in the derived class refining the definitions of its superclasses. Inheritance can be used to extend the capabilities of a base class. Classes can be derived by classes that are themselves derived and this process can be extended to an arbitrary number of levels of inheritance.
An object inherits data members and member functions from the superclass of its class, and from all superclasses of that class. Member functions of a derived class can only access members of its base class that are declared in the public or protected part; private members cannot be accessed. The derived class methods invoked by a derived class object can access the protected members that have been inherited from the base class of the particular object that invoked the member function (i.e. *this), or of any other object of the derived class. The derived class member functions also have access to all private and protected, as well as public of course, members of their own class.
The inherited member functions and data of a derived class object can be accessed directly, as if they were members of the derived class and not of the base class, as long as access is permitted. However, if the same name is used both for a member of the base class and for a member of the derived class, the member of the derived class hides the corresponding member of the base class. By using the class scope resolution operator, the base class member, instead of the derived one, can be explicitly accessed. If the base class has a static member data, then there will be only one instance of it, regardless of how many subclasses have been derived from the base class and how many objects have been created.
To specify the base class of a derived class a colon is used after the name of the class, at its definition, followed by a specifier that defines the type of inheritance and the name of the superclass. The expression after the colon, which is used to specify the inheritance, is called class derivation list. To define a subclass using public derivation the keyword public should be used. In addition to public derivations, we can use protected and private class derivations. The following example shows a public derivation of a class.
class derivedClassName : public baseClassName { ……….. };
The colon (:) specifies that the derivedClassName class is derived from the baseClassName class. The derived class inherits the member variables and functions of the class from which it is derived, as well as the member variables and functions of all superclasses of that class.
The access to specific member variables and functions of the superclasses of a class are specified according to the location where they have been defined (i.e. in which section of the class definition) in those superclasses, and the way that the subclasses are derived, i.e. which of the keywords public, protected, and private has been used during the derivations. The access to members of a superclass and the “cost” of that access do not depend on the depth of the inheritance tree.
With a public derivation of a subclass all member variables and functions of the superclass retain their status in the derived subclass, i.e. a public member remains public allowing (unlimited) access to everyone.
Using a private derivation, all public and protected members of the superclass become private in the derived class, i.e. a public member of the superclass within the derived class can only be accessed by the members of the derived class.
Finally, with protected derivation, all public and protected members of the superclass become protected in the derived class. A protected member datum, or function, can be accessed only by member functions of the class and of its subclasses, or by friend functions.
Therefore, when deriving a subclass we can reduce the access privileges using private or protected derivation. The access specifier (public, protected, or private) defines the type of access the derived class has to the members inherited from its base class. Access is determined by the part of the class definition (public, protected, or private) in which a member datum or function is declared. The protected section is used to allow access to the members declared there only to members of that class and of its subclasses.
The derived class has no access to the private members of the base class unless it is declared to be a friend of the base class. Note that friendship is not inherited, i.e. a subclass of a derived class that has been declared a friend of its own base class, is not considered a friend of its base class superclass. Friendship must be explicitly granted from each class to all classes that should have access privileges to that class.
The access level of a member inherited from a superclass can be restored to the access it has in the superclass, instead of following the rules implied by the keyword used in the subclass derivation, by declaring the member in the appropriate public, or protected, access section using the name of the superclass followed by the class scope resolution operator and the name of the member. However, such an access declaration can only restore the member’s original access status; any attempt to increase or reduce the access of a base class member this way is invalid.
/* Example on public, protected and private derivation */
class Point {
private:
    double x;
protected:
    double y;
public:
    double z;
    Point(double xx = 0, double yy = 0, double zz = 0);
};

Point::Point(double xx, double yy, double zz)
{
    x = xx; y = yy; z = zz;
}

// Alternative derivations
class Voxel : public Point        // all members retain their status in the derived class
//class Voxel : protected Point   // public members become protected in the derived class
//class Voxel : private Point     // all members become private in the derived class
{
private:
    int color;

public:
    Point::z;    // e.g. declared in the public section to adjust the access of z to public
    Voxel(double x = 0, double y = 0, double z = 0, int col = 0);
    void print();
};

Voxel::Voxel(double x, double y, double z, int col) : Point(x, y, z)
{
    color = col;
}

void Voxel::print()
{
    // x is never accessible here, since x is private in Point
    cout << y;   // accessible, but under private derivation it becomes private in the derived class
    cout << z;   // accessible; its status in the derived class depends on the type of derivation
}

int main()
{
    Voxel v(4, 7, 2, 101110101);
    v.print();
    return 0;
}
2. Multiple Inheritance
A class may inherit the member variables and functions of more than one superclass, i.e. a derived class may have multiple base classes. In that case, all superclasses are listed after the colon, separated by commas, each with an indication of the type of derivation that should be used, i.e. one of the access specifiers public, protected, or private. There is no limit on the number of base classes that a derived class can inherit from. The base class constructors are invoked in the order in which the classes appear in the class derivation list, while the destructors are invoked in the reverse order.
/* Example: on multiple inheritance */
class People {
public:
    char first_name[20];
    char last_name[20];
    int age;
};

class Student : public People {
public:
    int student_id;
};

class Staff {
private:
    int social_sec_num;
};

class Faculty : public People, protected Staff {
private:
    int num_papers;
    .....
};
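The invocation order of the base class constructors can be made visible with a small sketch that logs each constructor call (the class names A, B, C and the logging string are illustrative assumptions):

```cpp
#include <string>

// A global string records the order of constructor calls: with
// class C : public A, public B, the base class constructors run in
// class-derivation-list order (A, then B), then C's own constructor.
std::string ctor_order;

class A { public: A() { ctor_order += "A"; } };
class B { public: B() { ctor_order += "B"; } };

class C : public A, public B {
public:
    C() { ctor_order += "C"; }
};

std::string construction_order() {
    ctor_order.clear();
    C c;                 // constructs A, then B, then C
    return ctor_order;   // "ABC"
}
```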
If a member function is hidden by another member function, we can explicitly specify which one we want to be used by putting the name of the class and the scope resolution operator (::) in front of the name of the member function.
className::functionName(…)
3. Inheritance: Constructors and Destructors
Although a derived class inherits the member data and functions of its base class, the constructors of the base class are not inherited. Therefore, the derived class needs to provide its own constructors, which are called after the base class and member object constructors. When a derived class object is created, a base class constructor is automatically invoked first, typically to initialize the member data that correspond to the base class, and then the derived class constructor is called to initialize the additional data members of the derived class. The constructors along the superclass/subclass chain are executed in top-down order, i.e. a base class constructor, if executed, runs before the derived class constructor.
A new set of constructors is typically provided for a derived class to make the proper initialization and, if necessary, call the proper superclass constructor with certain arguments. The derived class constructor is used to initialize the members that have been added by the derived class, while the superclass constructor(s) should take care of the corresponding classes data members.
A specific constructor of a superclass can be invoked by providing, after the header of the constructor definition, a colon followed by the name of the superclass (i.e. the constructor name) and the desired arguments with which it is to be called. This is called a member initialization list, and it can be used to pass arguments to the constructor of the base class. The order of the comma-separated member initialization list does not affect the order of constructor invocation. The order in which the constructors are invoked is: first, the base class constructor is called, and in the case of multiple inheritance the order follows the class derivation list. Next, the constructors of member objects are called in the order in which the members have been defined in the derived class definition. Finally, the derived class constructor is called. A derived class constructor can invoke a constructor only of its direct superclass.
If the derived class has constructors but the base class does not, the proper derived class constructor is called every time a derived class object is defined. In the opposite case, i.e. when the derived class has no constructors while the base class does, the base class must have a default constructor, which is automatically invoked whenever a derived class object is defined.
A derived class constructor needs to explicitly invoke one of the base class constructors, if the base class has constructors but not a default constructor, i.e. a derived class constructor must explicitly invoke one of the base class’ constructors in its header. Alternatively, a default constructor can be provided for the base class. Then, if no base class constructor is explicitly invoked, the default constructor is automatically invoked whenever an object of the derived class is defined.
If a class is used as a base class only to be able to define the subclasses and there is no intention to have objects of that class, the constructors of the base class can be defined as protected, which restricts their access to the derived class (constructors). A class that has no actual objects, instances of itself, is called an abstract base class.
The following example demonstrates how the constructors of the class and its superclass are invoked and how a specific constructor of a superclass can be explicitly called with certain arguments.
/* Example on constructors calling other constructors */
class Point {
private:
    double x, y;
public:
    Point();
    Point(double, double);
};

Point::Point()
{
    x = 0; y = 0;
}

Point::Point(double xx, double yy)
{
    x = xx; y = yy;
}

class Pixel : public Point {
private:
    int color;
public:
    Pixel();
    Pixel(double x, double y, int col);
};

Pixel::Pixel()
{
    color = 0;
}

Pixel::Pixel(double x, double y, int col) : Point(x, y)
{
    color = col;
}

int main()
{
    Point p(1.7, 7.2);   // Point::Point(double, double) is called with (1.7, 7.2)

    Pixel px1;           // the default constructors, Point::Point() and then
                         // Pixel::Pixel(), are called

    Pixel px2(2.75, 8.23, 111000101);
                         // Point::Point(double, double) and then
                         // Pixel::Pixel(double, double, int) are called

    return 0;
}
In the above example the expression: Point(x,y) after the parameters in the header of the Pixel constructor causes the invocation of a specific constructor of the class Point.
Similarly, the derived class, member object class, and base class destructors are invoked as soon as the lifetime of an object (of the derived class) reaches its end. In contrast to the constructors, which are invoked in top-down order starting from the base class, the destructors are called in the reverse order, invoking first the derived class destructor, then the destructors of its member objects, and, finally, the superclass destructor, etc. In addition, although constructors may not be virtual, destructors may be virtual, allowing the invocation of the destructor of the derived class for the object pointed to by a pointer (of base class data type). The reverse order is used to ensure that the most recently allocated memory is the first to be released.
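This top-down construction and bottom-up destruction can be sketched by logging both phases (the names Base and Derived and the log string are illustrative assumptions):

```cpp
#include <string>

// The log records that the base part is built first and destroyed last.
std::string lifetime_log;

class Base {
public:
    Base()  { lifetime_log += "B+"; }
    ~Base() { lifetime_log += "B-"; }
};

class Derived : public Base {
public:
    Derived()  { lifetime_log += "D+"; }
    ~Derived() { lifetime_log += "D-"; }
};

std::string lifetime() {
    lifetime_log.clear();
    {
        Derived d;   // Base() runs first, then Derived()
    }                // ~Derived() runs first, then ~Base()
    return lifetime_log;
}
```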
4. Inheritance: Redefining Member Functions
A member function of a superclass can be redefined (with the same name) in a subclass and, depending on which object invokes it, the corresponding one is invoked. When an object of the derived class is used to invoke a function, the search for the member function starts from the derived class definition, which results in invocation of the derived class member function if it is available, unless a class scope resolution operator is used to explicitly specify which class’s member function should be called. A specific member function of a superclass can be called by using the name of the superclass followed by the class scope resolution operator before the name of the function, since the member of the derived class hides the inherited member. Such an explicit call to a member function of a superclass can also be made from inside the body of the member function of the subclass, i.e. if the member function of a superclass is hidden by a member function of the subclass, the class scope resolution operator can be used to explicitly specify which class’s member function should be invoked.
The member functions of any superclass and subclass can be overloaded as any other set of functions in a certain scope. Note that the member functions of a base class and the member functions of its derived class do not all together make up a set of overloaded member functions, because the former are considered to be in the base class scope, while the latter in the derived class scope.
The previous example has been extended, as shown below, to show how to explicitly call a superclass member function.
/* Example on redefining and invoking member functions */
class Point {
private:
    double x, y;
public:
    Point();
    Point(double, double);
    void print();
};

Point::Point()
{
    x = 0; y = 0;
}

Point::Point(double xx, double yy)
{
    x = xx; y = yy;
}

void Point::print()
{
    cout << " (x,y) = (" << x << "," << y << ") ";
}

class Pixel : public Point {
private:
    int color;
public:
    Pixel();
    Pixel(double x, double y, int col);
    void print();
};

Pixel::Pixel()
{
    color = 0;
}

Pixel::Pixel(double x, double y, int col) : Point(x, y)
{
    color = col;
}

void Pixel::print()
{
    this->Point::print();   // the member function print() of class Point is called
    cout << " color = " << color;
}

int main()
{
    Pixel px1;
    cout << "\n Pixel px1:";
    px1.print();            // the member function print() of class Pixel is called
    cout << endl;

    Pixel px2(2.75, 8.23, 111000101);
    cout << " Pixel px2:";
    px2.Point::print();     // the member function print() of class Point is called

    cout << endl;
    return EXIT_SUCCESS;
}
Output
Pixel px1: (x,y) = (0,0)  color = 0
Pixel px2: (x,y) = (2.75,8.23)
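The scope rule stated above, that the member functions of a base class and of its derived class do not form a single overload set, can also be sketched directly (the class names and integer return markers are illustrative assumptions):

```cpp
// Derived::f(double) hides Base::f(int) rather than overloading it:
// in Derived's scope, f(3) converts 3 to double and calls
// Derived::f(double); Base::f(int) must be named explicitly.
class Base {
public:
    int f(int) { return 1; }       // marker 1: Base::f(int)
};

class Derived : public Base {
public:
    int f(double) { return 2; }    // marker 2: hides all Base::f overloads
};
```

If the two functions formed one overload set, `f(3)` on a Derived object would pick the exact match `f(int)`; because of hiding, it picks `f(double)` instead.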
5. Virtual Functions and Polymorphism
Virtual functions allow dynamic (or late) binding, which means that the selection of which function to call is done during execution, rather than during compilation. This provides the flexibility to perform the same kind of action on different types of objects, as long as they are all instances of, or derived from, a superclass whose function is defined as virtual. The selection of which of the virtual functions to invoke is done at run time. In contrast, the resolution of a non-virtual function is done by the compiler during compilation, a process called static binding. A non-virtual member function is invoked implicitly using the pointer this, which is of a certain data type. If a pointer to a base class object is used to invoke the function, even if the pointer stores the address of a derived object, the base class member function will be called.
Polymorphism is the ability to have a member function, or an operator overloading function, that behaves differently on different types of data. It is the ability to dynamically (i.e. at run time) bind a pointer of a base class type to a method, based on what is stored in the memory pointed to by the pointer rather than on the data type of the pointer. This is made possible by the ability of a pointer to a base class to point not only to base class objects, but also to objects of any of its subclasses. In contrast, a pointer to a derived class object cannot point to a base class object unless explicit casting is used. The decision of which function is invoked is delayed until run time, instead of being made during compilation as with non-virtual functions. However, polymorphism, which is a major characteristic of object-oriented programming, can be used only with pointers (or references), not with actual objects.
A function is defined as virtual by preceding the return data type in the member function declaration of the base class with the keyword virtual, e.g. virtual void print(void); When a member function of a class is declared virtual, the corresponding member functions in all of that class’s subclasses are automatically considered virtual. However, the keyword virtual, although optional, is typically repeated in the derived classes at the corresponding virtual function declarations to clarify the nature of the function. The keyword virtual should be used only in the function declarations and not in external definitions of the declared functions. The keyword virtual indicates that the selection of which function to invoke should be delayed until run time and be based on the data type of the object that is pointed to by the pointer that invoked the member function.
A derived class does not need to redefine a member function that has been indicated as virtual in its base class. In that case it inherits the member function from the base class. A virtual function that is redefined in a derived class must have the same signature as the base class function. Otherwise it will simply hide the base class function and compile-, rather than run-, time binding will be used. It is wrong to provide in a derived class a member function with the same signature as the virtual function declared in the base class but with different return data type. The only exception is to have as return data type the address or reference of a derived class object instead of a base class object.
/* Simple example on virtual functions */
#include <iostream.h>
#include <stdlib.h>

class MyBase {
public:
    void print()
    {
        cout << "\n Printing through the base class: MyBase" << endl;
    }
    virtual void print(int i)
    {
        cout << "\n Printing through the base class: MyBase:"
             << " i = " << i << endl;
    }
};

class MyDerived : public MyBase {
public:
    void print()
    {
        cout << "\n Printing through the derived class: MyDerived" << endl;
    }
    virtual void print(int i)
    {
        cout << "\n Printing through the derived class: MyDerived:"
             << " i = " << i << endl;
    }
};

int main(void)
{
    MyBase b;
    MyDerived d;

    b.print();
    d.print();

    MyBase *pb = &d;
    MyDerived *pd = &d;

    pb->print();   // non-virtual: resolved by the pointer's type (MyBase)
    pd->print();

    pb->print(1);  // virtual: resolved by the pointed-to object's type (MyDerived)
    pd->print(2);

    return EXIT_SUCCESS;
}
Output
Printing through the base class: MyBase
Printing through the derived class: MyDerived

Printing through the base class: MyBase
Printing through the derived class: MyDerived

Printing through the derived class: MyDerived: i = 1
Printing through the derived class: MyDerived: i = 2
A pointer to an object of a certain class can point not only to any object of that class but also to any object of that class’s subclasses. Therefore, we may have an array of pointers to a base class which are used to point to objects of any of the base class’s subclasses. Having defined a virtual function, a pointer to the base class can be used to point to an object of the base class or of any of its subclasses, and the decision of which member function to invoke depends on the current contents of the pointer, i.e. which class object it points to, rather than the pointer’s data type. In contrast, a pointer to a derived class cannot point to an object of the base class unless an explicit cast is used. If a pointer to a base class stores the address of a derived class object and both base and derived class have a non-virtual function, or datum, with the same name, the base class member function, or datum, is selected during static binding.
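An array of base class pointers together with virtual dispatch can be sketched as follows (the Shape hierarchy and the function total_area are illustrative assumptions):

```cpp
// One array of Shape pointers holds objects of different subclasses;
// each call to area() is resolved at run time by the pointed-to object.
class Shape {
public:
    virtual double area() const { return 0.0; }
    virtual ~Shape() {}
};

class Square : public Shape {
    double side;
public:
    Square(double s) : side(s) {}
    virtual double area() const { return side * side; }
};

class Rect : public Shape {
    double w, h;
public:
    Rect(double w_, double h_) : w(w_), h(h_) {}
    virtual double area() const { return w * h; }
};

double total_area() {
    Square sq(2.0);
    Rect   rc(3.0, 4.0);
    Shape *shapes[2] = { &sq, &rc };   // base class pointers to derived objects
    double sum = 0.0;
    for (int i = 0; i < 2; i++)
        sum += shapes[i]->area();      // dynamic binding picks the right area()
    return sum;                        // 4 + 12 = 16
}
```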
If the virtual member function is never expected to be used with an object of the base class, it can be specified as a pure virtual function by providing, instead of the body of the function, an assignment to 0, e.g. virtual void print(void) = 0; Then, if the function is called, a run-time error occurs, since it is not intended to be invoked; it is only provided so that derived classes can define the actual functions, which are called based on the contents of the memory pointed to by the pointer that invokes the virtual function.
The class scope resolution operator may be used to disable the virtual mechanism and explicitly invoke the member function of a certain class. Such an explicit invocation is resolved at compile, rather than run, time. Declaring a pure virtual function means that it is not considered during virtual mechanism resolution, i.e. it cannot be invoked through the virtual mechanism, and it makes the class an abstract base class. However, a definition for a pure virtual function may be provided (i.e. the pure virtual function may be defined) and the function may be statically invoked (i.e. resolved at compile time).
If a pointer to a class is used to point to objects of subclasses of that class, the destructor of the class must be declared as virtual, to ensure that proper deallocations of memory occur when an object is deleted.
Although a constructor may not be declared as virtual, a destructor can be a virtual function. The reason for having a destructor declared as virtual is that if the dynamically allocated memory for a derived class is assigned to a pointer to a base class object, then the base class destructor will be called instead of the derived class destructor resulting in a memory leak. Therefore, it is good to declare as virtual the destructor of the base class if any virtual functions are used and especially when dynamic memory allocation is used. When the destructor is virtual then the order of destructor invocations starts with the derived class and continues with the destructors of its superclasses. Also a virtual function may not be static, since a virtual member function needs to be associated with a particular object of a class rather than the class as a whole.
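The effect of a virtual destructor can be sketched with a flag set by the derived class destructor (the names BaseRes/DerivedRes and the flag are illustrative assumptions):

```cpp
// Without `virtual` on ~BaseRes, `delete pb` would run only the base
// class destructor and the derived part would never be cleaned up.
// With a virtual destructor, delete-through-base runs ~DerivedRes first.
bool derived_destroyed = false;

class BaseRes {
public:
    virtual ~BaseRes() {}            // virtual: safe to delete via BaseRes*
};

class DerivedRes : public BaseRes {
public:
    ~DerivedRes() { derived_destroyed = true; }
};

bool destroy_via_base_pointer() {
    derived_destroyed = false;
    BaseRes *pb = new DerivedRes;    // derived object held through a base pointer
    delete pb;                       // invokes ~DerivedRes, then ~BaseRes
    return derived_destroyed;        // true only because ~BaseRes is virtual
}
```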
Another use of the keyword virtual is to declare a base class as virtual. This is useful when a derived class inherits from multiple (direct) superclasses that happen to have already inherited from a common superclass higher in the class hierarchy. Then, the derived class inherits multiple times from the same (the common class to its superclasses) base class. To avoid this we can use the keyword virtual at the derivation of its superclasses, as shown below:
class MyBase { ....... };
class MySuper1 : public virtual MyBase { ....... };
class MySuper2 : public virtual MyBase { ....... };
class MyDerived : public MySuper1, public MySuper2 { ....... };
The use of a virtual base class (as above), which is called virtual inheritance, allows the inheritance and sharing of a single base class sub-object, instead of having unnecessary multiple copies of the base class whenever the base class occurs in the derivation hierarchy. Virtual inheritance avoids duplication of base class sub-objects and the ambiguities that arise with such duplicates. However, there is a performance and complexity impact when using virtual inheritance.
6. Abstract Classes
A class that is used as a general base class to derive other classes, without any instances of that class ever being created, is called an abstract class. A class can be made abstract by declaring one or more of its member functions as pure virtual functions. This is achieved by setting the declaration of the function to zero. Then, the member function will not be considered when a function of the same signature is called; rather, one of the derived class functions will be called.
e.g.: virtual void print(void) = 0;
An abstract class needs to have a derived class, i.e. it is invalid to define an object of an abstract class. A set of functions is typically defined as pure virtual functions in the base class to provide a common public interface for any current or future derived classes. A member function of an abstract base class is never intended to be called, as no instance of the abstract base class is ever anticipated. An attempt to define an object of an abstract base class results in a compile-time error.
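A minimal sketch of an abstract base class with a common interface (the Figure/Circle names and the helper area_of are illustrative assumptions):

```cpp
// area() is pure virtual, so Figure is abstract and cannot be
// instantiated; concrete subclasses supply the implementation.
class Figure {
public:
    virtual double area() const = 0;   // pure virtual: makes Figure abstract
    virtual ~Figure() {}
};

class Circle : public Figure {
    double r;
public:
    Circle(double radius) : r(radius) {}
    virtual double area() const { return 3.14159265 * r * r; }
};

// Works for any concrete Figure: the interface is the abstract class.
double area_of(const Figure &f) {
    return f.area();
}
```

Writing `Figure f;` would be rejected at compile time, while any object of a concrete subclass can be passed to `area_of`.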
7. File Streams
Although the easiest way to read from a file or to write to a file is using redirection, the direct way to open a file and read from or write to it is using the file-handling library of C++. The declarations of the library are in the header file fstream.h, which must be included in a program so as to be able to use input and output streams to a file.
To create an input stream, i.e. open an input file for reading, the following definition should be used, which instantiates an ifstream object:
ifstream inputStreamName (“fileName”);
Then, the inputStreamName can be used instead of the input stream cin to read from a file named fileName, rather than from the standard input.
Similarly to write to a file, i.e open an output file for writing to it, the following definition should be used, which instantiates an ofstream object:
ofstream outputStreamName (“fileName”);
Then, the outputStreamName can be used instead of the output stream cout to write to a file named fileName, rather than to the standard output.
After using the ifstream, or ofstream, object to read from, or write to, a file, the file should be closed when access to it is no longer needed. A file can be closed using the member function close(), i.e. inputStreamName.close(); or outputStreamName.close();
EOF (end-of-file), a constant defined in the iostream.h header file, can be used to read data until the end of the file is reached, by checking whether what was read is equal to EOF (which is machine dependent). On Unix workstations, EOF is entered using <Control-d>.
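Put together, a minimal sketch of the above (using the standard <fstream> header; the file name demo.txt is arbitrary, and a writable working directory is assumed):

```cpp
#include <fstream>
#include <string>

// Write one word to a file, close it, then open it again and read the
// word back through an input file stream.
std::string round_trip() {
    std::ofstream out("demo.txt");     // open an output stream to the file
    out << "hello";
    out.close();                       // close when access is no longer needed

    std::ifstream in("demo.txt");      // open an input stream from the same file
    std::string word;
    in >> word;                        // used like cin, but reads from the file
    in.close();
    return word;
}
```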
You may optionally take a look at the following topics, which were not covered in the course.
A.1. Namespaces
When several different libraries are used in a program there may be conflicts among identical global names of variables and functions. Namespace definitions can be used to reduce this problem by enclosing the source code (declarations and definitions) in certain namespaces. Each namespace has an associated namespace scope and contains the namespace members, which can be variables, class definitions, functions, etc., i.e. anything that could have been declared in the global scope. A namespace is defined using the keyword namespace followed by the name of the namespace and the declarations and definitions enclosed in curly braces. The definition of a namespace does not need to be contiguous, but it can be provided in several different points, even in different files.
To refer to a namespace member, the qualified name notation indicating the namespace is required: the name of the namespace should be provided, followed by the name of the member (variable or function) that is to be accessed. In addition, to be able to refer to a member of a namespace, the namespace must have been declared earlier. Typically, the namespace declaration is provided in a header file, which is included everywhere the namespace needs to be used. Since a member variable should be defined only once, the keyword extern should be used in the declarations of the member variables of a namespace. A member of the global namespace can be referred to by using the scope resolution operator (::) without a namespace name preceding it. Therefore, it can be used to access global members that are hidden by local ones. Nested namespaces, i.e. defining a namespace within another namespace, are allowed. In that case, more than one namespace name and scope resolution operator need to be used to specify the namespace scope.
Namespaces are a relatively recent feature of C++, and not all compilers conform to the corresponding part of the C++ standard. To avoid typing long names to specify the namespace scope, there are two mechanisms that can be used: namespace aliases and using directives. However, not all compilers support these mechanisms according to the C++ standard.
Using a namespace alias we can associate a simpler name to an existing (and often long) namespace that we need to use. In particular, we can declare an alias (e.g. MYLIB) for a namespace with a long name (e.g. GraphicsDrawingFunctions) using the following declaration:
namespace MYLIB = GraphicsDrawingFunctions;
Then, to invoke the member function draw of the namespace we can use MYLIB::draw(), instead of GraphicsDrawingFunctions::draw().
The using directive can be used to access members of a namespace without the need to explicitly refer to the specific namespace, i.e. it provides unqualified access to the namespace. A namespace can become visible with a declaration in which the keyword using is followed by the name of the namespace and the name of the member of the namespace we want to access. If no specific member is given, then all members of the namespace become visible, i.e. are considered in the scope in which the using declaration appears. The next example demonstrates the use of a namespace named NameSpaceTest2 and its declaration (in the file test2.h) and definition (in the file test2.C).
test2.h
namespace NameSpaceTest2 {
    extern int x;
    extern void print(double d);
}
test2.C
#include <iostream.h>

namespace NameSpaceTest2 {
    int x = 22;
}

namespace NameSpaceTest2 {
    void print(double d)
    {
        cout << "\n printing through NStest2::print(double d) "
             << " d = " << d << endl;
    }
}
test1.C
#include <iostream.h>
#include <stdlib.h>
#include "test2.h"

int x = 11;

void print(int i)
{
    cout << "\n printing through ::print(int i): i = " << i << endl;
}

using NameSpaceTest2::print;

int main()
{
    int x = 77;

    cout << "\n x = " << x << endl;
    cout << " ::x = " << ::x << endl;
    cout << " NameSpaceTest2::x = " << NameSpaceTest2::x << endl;

    print(3);
    NameSpaceTest2::print(4);

    print(4.11);
    NameSpaceTest2::print(4.22);

    return EXIT_SUCCESS;
}
Output
x = 77
::x = 11
NameSpaceTest2::x = 11

printing through ::print(int i): i = 3
printing through NStest2::print(double d)  d = 4

printing through NStest2::print(double d)  d = 4.11
printing through NStest2::print(double d)  d = 4.22
A namespace defined without a name, called an unnamed namespace, can be used to define members (functions, classes, and variables) in only a portion of a program, without access from other files. An unnamed namespace is defined using the keyword namespace followed by curly braces containing all the definitions. Its members are visible only in that file (their scope is limited to the file) but have extent until the termination of the program. An unnamed namespace is equivalent to a static global member that is defined and used in one file and cannot be accessed from any other file, although its extent lasts until the end of the program.
A special namespace named std has been used to declare and define all components of the C++ standard library. However, many compilers do not support this feature. All members of this namespace can become visible with the following statement: using namespace std;
A.2. Assertions
Assertions are used in a program as conditions that must be true in order to ensure correctness of the program. They can be used as preconditions, postconditions, and invariants, to verify that a condition is true, e.g. in the entrance, exit, or anywhere within a function.
To use assertions the header file assert.h must be included and the preprocessor macro assert() can be used to check whether a condition is true. If the assertion fails the program terminates providing information about the error that occurred.
The following example demonstrates how assertions can be used to check whether a file has been opened properly for reading. In this case, an attempt is made to open a nonexistent file, and the assert, which checks that the input file stream is in a valid (non-null) state, fails, resulting in program termination.
/* Example on the use of assertions */
#include <iostream.h>
#include <fstream.h>
#include <stdlib.h>
#include <assert.h>

int main()
{
    char str1[] = "existing";
    system("ls > existing");   // using system() an OS command can be executed
    ifstream ifp1(str1);
    assert(ifp1);              // line 14
    cout << "\n File " << str1 << " has been opened properly" << endl;

    char str2[] = "nonexisting";
    ifstream ifp2(str2);
    assert(ifp2);              // line 19
    cout << "\n File " << str2 << " has been opened properly" << endl;

    return EXIT_SUCCESS;
}
Output
assertions.C:19: failed assertion ‘ifp2’ Abort
A.3. C++ Standard Library String Class
A string class that has several convenient and object-oriented capabilities is provided by the C++ Standard Library. In order to use it, the string header file needs to be included. The following example shows how an object of this class can be defined and used, and how it can be combined with the more traditional C Standard Library string, which is represented as an array of characters.
/* Example on the C++ standard library string class */
#include <iostream.h>
#include <iomanip.h>
#include <cstring>
#include <string>

int main(void)
{
    string str1("Testing"), str2;
    string str3(str1);
    char str4[] = "MIT";
    const char *str5 = "";

    cout << "\n str1: " << setw(20) << str1 << "\t size = " << str1.size();
    str1.empty() ? cout << "\t (empty)" << endl : cout << endl;

    cout << " str2: " << setw(30) << str2 << "\t size = " << str2.size();
    str2.empty() ? cout << "\t (empty)" << endl : cout << endl;

    cout << " str3: " << setw(20) << str3 << "\t size = " << str3.size();
    str3.empty() ? cout << "\t (empty)" << endl : cout << endl;

    cout << " str4: " << setw(20) << str4 << "\t size = " << strlen(str4);
    strlen(str4) ? cout << endl : cout << "\t (empty)" << endl;

    cout << " str5: " << setw(20) << str5 << "\t size = " << strlen(str5);
    strlen(str5) ? cout << endl : cout << "\t (empty)" << endl;

    if (str1 == str3)
        str2 = str1 + str4;
    str2 += str3;
    str2[10] = 't';

    cout << " str2: " << setw(15) << str2 << "\t size = " << str2.size();
    str2.empty() ? cout << "\t (empty)" << endl : cout << endl;

    return 0;
}
Output
str1: Testing size = 7
str2: size = 0 (empty)
str3: Testing size = 7
str4: MIT size = 3
str5: size = 0 (empty)
str2: TestingMITtesting size = 17
A.4. Linkage Specifications: extern “C”
An existing compiled C function may be incorporated into a C++ program and used, provided a declaration of the function with extern "C" preceding its return data type is given, e.g.: extern "C" double fun(int, double);
A.5. Command Line Arguments
To be able to use command-line arguments, i.e. to provide arguments when executing a program, main must be declared as: main(int argc, char *argv[]) { ... }, where argc is the number of command-line arguments and argv is an array of strings, each of them corresponding to a command-line argument.
These notes were prepared by Petros Komodromos.
Topics
- Function templates
- Class templates
- Sorting and searching algorithms
- Insertion sort
- Selection sort
- Shellsort
- Quicksort
- Linear search
- Binary search
1. Function Templates
The template mechanism of C++ allows the development of general functions and classes without the need to know the data type of the variables used during implementation. Templates allow the development of type-independent source code. Suppose that we want to find the maximum of a set of numbers that may be of int, float, or double data type; the algorithm is the same regardless of the particular data type. Using templates, a single function, a function template, can be written that can be selectively instantiated to work with a specific data type. Similarly, class templates can provide generic descriptions of classes without the need to specify the data types used.
A template line, called template parameter list, precedes a template function declaration or definition specifying the parameters that are to be used as data types in the function. One or more data types are parameterized, allowing the instantiation of the function with varying data types specified to the corresponding parameters. The template parameter list begins with the keyword template followed by comma separated parameters enclosed in <> brackets. Two types of parameters can be specified as template parameters: template type parameters, that consist of the keyword class, or typename, and an identifier; and template nontype parameters which are essentially ordinary parameter declarations that are used to represent a constant in the template definition.
The template parameter list is followed by a function template declaration or definition. The only difference between the definition of a template function and that of an ordinary function is the presence and use of the template type parameters as data types. The template type parameters can be used in the same way as any other built-in or user-defined data type in the template declaration or definition that follows the template line. Similarly, the template nontype parameters can be used as constant values. If a name in the function template declaration or definition conflicts with a name used in the global scope, the latter is hidden. Although the name of a template parameter can be used in several function template declarations and definitions, it is not allowable to use the name of a template parameter more than once in the same template parameter list. The template parameter names used in template declarations and in the actual definition of a function template may be different. To specify a function template as inline or extern, the corresponding keyword must be used after the template parameter list, i.e. before the return type of the function.
A template function is a specification for an actual function that is created when the template is instantiated with a certain data type. During template instantiation, the parameters of a template are replaced by the actual data types, and the actual code for an individual function (with the data types defined) is created by the compiler. Any data-type-dependent errors are detected only during instantiation and not at the template definition. Prior to instantiation no function is defined, since the function template is simply a specification of how a function should be created during instantiation. A function template is instantiated either when it is invoked, or when its address is taken to be assigned to a pointer to a function. Then, according to the arguments that are used for the function call, the data types that correspond to the template type parameters, and the values corresponding to the template nontype parameters, are determined. This process is called template argument deduction and is based on the examination of the function arguments. Note that the return data type of the function template is not considered in template argument deduction. In addition, the template arguments can be explicitly specified, instead of relying on the template argument deduction mechanism. Template arguments can be explicitly specified as a comma-separated list in <> brackets between the name of the function template and its parenthesized argument list.
The following simple example shows how a template function can be used to determine the maximum element of a vector of numbers that can be of int or double data type.
/* Simple example of using function templates */
template<typename MyType>
MyType findMax(MyType vect[], int n)
{
    MyType maximum = vect[0];
    for(int i=1; i<n; i++)
        if(vect[i] > maximum)
            maximum = vect[i];
    return maximum;
}

template<class MyType, int SIZE>      // keywords class and typename are equivalent
inline MyType findMin(MyType (&vect)[SIZE])
{
    MyType minimum = vect[0];
    for(int i=1; i<SIZE; i++)
        if(vect[i] < minimum)
            minimum = vect[i];
    return minimum;
}

int main(void)
{
    int nx = 10, x[] = {3, -78, 12, 52, 17, -53, 2, 49, -9, 43}, ny = 7;
    double y[] = {39.2, -72.8, 5.2, 14.7, -15.3, 41.9, -92.3};

    cout << "\n Maximum element of x = " << findMax(x,nx) << endl;
    cout << " Maximum element of y = " << findMax(y,ny) << endl;

    cout << "\n Minimum element of x = " << findMin(x) << endl;
    cout << " Minimum element of y = " << findMin(y) << endl;

    return EXIT_SUCCESS;
}

Output

Maximum element of x = 52
Maximum element of y = 41.9

Minimum element of x = -78
Minimum element of y = -92.3
The definition must be visible at the point of instantiation, e.g. when a function template is called. In the above simple example the definition of the function template and the code in which it was instantiated appeared in the same file.
For larger programs the function template definitions are typically provided in a header file that is included in every file in which the function template is used, just as with inline functions.
2. Class Templates
Class templates can be used to develop a generic class prototype (specification) that can be instantiated with different data types. This is very useful when the same kind of class is used with different data types for individual members of the class. Parameterized types are used as data types, as in the function templates, and then a class can be instantiated, i.e. constructed and used, by providing arguments for the parameters of the class template. A class template is a specification of how a class should be built (i.e. instantiated) given the data type or values of its parameters.
A class template is defined by a line which defines the parameters, using the keyword template followed by the parameters (template type and nontype parameters) enclosed in <> brackets, known as the template parameter list. Each template type parameter is preceded by the keyword class or typename, and it is replaced by the corresponding argument when the class is instantiated. The name of a template type parameter represents a built-in or user-defined data type that will be defined during class instantiation. A template nontype parameter is like an ordinary (function-parameter) declaration and represents a constant in the class template definition, i.e. it should be possible to determine it at compilation time. A parameter can be specified only once in the template parameter list and should not have the same name as a member of the class. The name of a template parameter can be reused in other template declarations or definitions, and can also be different in declarations or definitions of the same class template. If the name of a template parameter is the same as that of a global variable, then the latter is hidden by the parameter.
In addition, the class template parameters (both type and nontype) can have default arguments that are used, during class template instantiation, if arguments are not provided. Because the provided arguments are used starting from the far left parameter, default arguments should be provided for the rightmost parameters.
Then, the declaration or definition of the class follows, using the defined type parameters as data types. The definition of a class template as well as the definitions of externally defined member functions are similar to ordinary class and member function definitions with the only difference being, besides the template parameter list at their beginning, the use of the template parameters. To define externally defined member functions of a class template, the template parameter list must precede the definition to make the parameters available to the function. In addition, the template parameters should also be used in <> brackets list before the scope resolution operator to indicate the actual name of the specific class which is a certain instantiation of the class template. A member function of a class template is itself a function template which is instantiated only whenever the function is invoked or its address is taken (and not when the class template is instantiated), using the corresponding data types used for the associated class object.
To create a particular class and define an object of that class, the name of the template class is used followed by a comma-separated list of either data types that are used as arguments to the type parameters, i.e. specifying the data types to be used for the class creation, or arguments that are passed as values to the nontype parameters of the class template. This process is called template substitution. Each different instantiation of a class template is considered a different class, which is identified by the name of the class template followed by a comma-separated list, enclosed in <> brackets, of the parameters used for the instantiation. In contrast to function templates, where some nontype parameters may be deduced from the way they are used, e.g. the size of an array, the template parameters of a class template must either be provided as arguments or have default values that can be used.
Sometimes ambiguity may arise during instantiation of a template class, e.g. due to already existing member functions using certain data types which come in conflict with generated member functions using the specified data type which may happen to be the same.
The following simple example demonstrates the definition and use of a class template.
myTemplate.h
template <class myTypeX, typename myTypeY>
class Point;

template <typename myTypeX, class myTypeY>
class Point
{
  private:
    myTypeX x;
    myTypeY y;
    Point();

  public:
    Point(myTypeX x, myTypeY y)
    {
        this->x = x;
        Point::y = y;
    }
    void print(void);
};

template <class myTypeX, class myTypeY>
void Point<myTypeX,myTypeY>::print(void)
{
    cout << " (x,y) = (" << x << "," << y << ") ";
}
myTemplate.C
#include <iostream.h>
#include <stdlib.h>
#include "myTemplate.h"

int main()
{
    Point<int,double> p1(3, 9.25);
    cout << "\n p1 = ";
    p1.print();

    Point<double,int> p2(3.74, 9);
    cout << "\n p2 = ";
    p2.print();

    cout << endl;
    return EXIT_SUCCESS;
}
3. Sorting and Searching Algorithms
Sorting and searching are fundamental operations in computation and information technology.
Because searching a sorted array is much more efficient than searching an unsorted one, sorting is used to facilitate searching. In many programs the running time is determined by the time required for sorting. Therefore, it is important to implement a fast sorting algorithm.
For all algorithms, presented below, assume that the N data (elements) are stored in an array named A.
4. Insertion Sort
This is an elementary algorithm whose nested loops give it an O(N^2) running time.
Each element, starting from A[0], is considered one at a time and is inserted into its proper ordered position among those that have already been considered.
Example:
5. Selection Sort
This is another elementary sorting algorithm with an O(N^2) running time.
The element with the smallest value in the array is identified and placed in the first position. Then, the element with the smallest value among the remaining N-1 elements is selected and placed in the first position of that N-1 element subarray. Continuing this procedure, the array is sorted, as shown by the following example.
Example:
6. Shellsort
Shellsort is an extension of insertion sort that improves its efficiency by allowing exchanges of non-adjacent elements. Although in some rare cases O(N^2) time is required, the running time is usually O(N^(3/2)).
First, a gap size is selected by dividing the number of elements by 2. The subsequences of elements spaced "gap-size" apart are then sorted. Next, the gap size is divided by 2 and the sorting at the new gap size is repeated. Finally, the gap size becomes equal to 1 and the entire array is sorted.
Example:
7. Quicksort
Quicksort is a very fast sorting algorithm with an O(N lg N) average running time. Its worst-case performance is O(N^2), but this can usually be avoided with certain techniques. It is a "divide and conquer" sorting algorithm: it partitions the data into two parts and then sorts them independently.
It sorts in place and has a simple recursive structure.
An element of the array, called pivot, is picked.
Then, one index starts from each side of the array, and the two move toward each other. The higher index is decreased until an element with a value smaller than that of the pivot is found. Similarly, the lower index is increased until an element with a value larger than that of the pivot is found. If the two indices are different, the two corresponding elements are out of order and need to be exchanged with each other.
The above step is repeated from the point where the process was interrupted.
If the two indices are the same, then if the value of the selected element is less than that of the pivot the selected element and the pivot are exchanged. Otherwise, no exchange should occur.
At this point the algorithm has grouped the elements of the array into two subarrays: one has all elements smaller than or equal to the pivot, and the other all elements larger than or equal to the pivot.
Then, the algorithm is applied recursively to each subarray until the number of the elements of a subarray is equal to 0 or 1.
Example:
Note: the example figure marks the pivot, the left index, the right index, and positions where both indices coincide.
Black cells are used to separate the elements of the array, i.e. they are not elements of the array.
The above array is of size N=8.
8. Linear Search
Linear search is the simplest searching algorithm and can be applied to unsorted data. The search proceeds through the elements in sequence, looking for a certain element. In the worst case, which is that of an unsuccessful search, it takes N iterations; on average it takes N/2 iterations.
Example:
Search for element with key value equal to 33
9. Binary Search
Binary search can be used on sorted data. It splits the data in half and determines in which half the desired element must be located (if it exists in the data set). It then recursively repeats the halving, each time keeping the part in which the desired element may be.
This algorithm requires O(lg N) computational time.
Example:
Search for element with key value equal to 33
Topics
- Introduction to Java®
- Compiling and running a Java® application and a Java® applet
- Data types
- Variables, declarations, initializations, assignments
- Operators, precedence, associativity, type conversions, and mixed expressions
- Control structures
- Comments
- Arrays
- Classes and Objects
- Constructors
- Initializers
- Member data and functions
- Function overloading
1. Introduction to Java®
Java® is an Object-Oriented Programming (OOP) language, which is similar to C++ but with certain characteristics that allow the simple development of portable programs with graphics and graphical user interfaces. The provided classes allow very simple and efficient development of complicated programs that can be executed on any machine, irrespective of the operating system, as long as it supports Java®. You can read more about "what is Java®" in the relevant paragraph of the Java® Tutorial, provided by Sun.
The portability of Java® programs is based on the Java® Virtual Machine (Java® VM) and the intermediate compilation into bytecode. The bytecode can, then, be interpreted by the Java® VM, which translates the bytecode instructions into machine instructions that your computer can understand and execute.
The Java® platform consists essentially of the Java® VM, which takes care of the compilation and interpretation issues (e.g. portability), and the Java® API, which provides a large collection of software components that can be used directly by a Java® programmer. You can read more about it in the on-line paper "The Java® Platform", by Douglas Kramer.
The Application Programming Interface (API) provides several classes that can be used to efficiently write programs with graphics content and graphical user interfaces. The latter can be achieved with C++ only by combining it with graphics libraries such as Open Inventor or OpenGL, and with toolkit libraries such as Tcl and Tk.
In addition, Java® facilitates the development of programs that deal with networking, security issues, databases, 3D graphics, and many other issues that a typical high level language, such as C++, does not provide.
The following are good references to learn Java®:
- The Java® Tutorial. Mary Campione and Kathy Walrath. 2nd edition.
- Core Java®. Gary Cornell and Cay Horstmann. 2nd edition.
- The Java® programming language. Ken Arnold and James Gosling. 2nd edition.
- Java®: How to program. Deitel & Deitel. 2nd edition.
- Java_®_ 2 Platform API, v 1.3
If you are interested to read more about Java® you can find more information in the following on-line paper by James Gosling and Henry McGilton:
- “The Java® Language Environment - A White Paper.” May 1996.
You can find more information about Java® in the Sun’s Java® page.
2. Compiling/Running a Java® Application/Applet
Java® is a pure Object Oriented Programming (OOP) language: any Java® program is built from classes. C++ can be used as an OOP language, but not necessarily so, since it can also be used to develop non-object-oriented programs.
The simplest, probably, Java® program is a Java® application which prints a message. A Java® application is a Java® program that can be executed independently without the need of any browser.
The following Java® program is written in a file named welcome.java:
welcome.java
class Welcome
{
public static void main(String args[])
{
System.out.println(“Welcome to 1.124”);
}
}
Because global functions are not allowed in Java®, we need to provide the main() function in a class. In addition, we need to make it public so as to be accessible, and static so that it is a class function rather than a function associated with a certain instance of the class. The main function must have a single parameter of type String[] and must return nothing (i.e. be void). Any class can have its own main function.
To compile a java program the java compiler javac is used as follows:
javac welcome.java
This command generates the bytecode for the classes that are defined in the Java® program. In this case, it generates the file Welcome.class which contains the bytecode for the class Welcome. The name of the file with the bytecode is constructed from the name of the class plus the extension class. The bytecode is instructions for the Java® Virtual Machine. These instructions are the same for any type of machine or operating system. To run the program, the Java® interpreter needs to be used to interpret the Java® bytecode into instructions of the specific machine on which the program is running.
The command to run a Java® program is as follows, using the java interpreter:
java Welcome
Then, the class Welcome is loaded and interpreted printing out the following:
Welcome to 1.124
Some programming languages, such as Basic, also use an interpreter, which makes the development and the debugging of the programs faster and more efficient. However, most high level languages use a compiler and not an interpreter, while Java® uses both. The bytecode files can, in general, run in any machine with any operating system, as long as the proper interpreter is available. However, the execution of such interpreted programs is relatively slow.
Many other programming languages, such as C/C++, are using a compiler, which translates the source code files into machine instructions. Although the execution of compiled programs is much faster, the executable cannot run on a machine with a different architecture, since it recognizes a different set of instructions.
Java® combines both a compiler and an interpreter. The compiler (javac) compiles the Java® source code files into bytecode, and the interpreter (java) is used every time the program is executed to translate the bytecode (i.e. the Virtual Machine instructions) to the specific machine instructions and execute them. This way Java® programs can run on any type of computer and under any operating system assuming that the Java® interpreter is available and can be used on that machine. However, Java® programs are, in general, slower than compiled programs (e.g. C++ executable programs) since interpretation takes place before execution.
The Java® Development Kit (JDK) also provides an appletviewer to check and run applets, a debugger named jdb to debug Java® programs, and several other tools that help in the development and documentation of Java® programs.
Java® applications are stand-alone Java® programs that can be executed without the need of a browser, while Java® applets run within a Java® compatible browser. The execution of any Java® application begins with the main method of the corresponding class, i.e. the class with which the Java® interpreter was invoked. The above example is a Java® application, while the following is a simple Java® applet.
A Java® applet is based on a set of inherited conventions and functionalities that allow it to be executed in an appletviewer or any Java® enabled browser. The source code for the applet is provided below, followed by the html file that needs to be used so as to load the class from a Java® enabled browser, or using the appletviewer provided with the Java® Development Kit (JDK).
An applet inherits from (extends) the Applet class provided by the java.applet package of the Java® Core API. Here, the AWT Applet is used, mostly for historical reasons; today, the Swing JApplet is preferred in most cases. In the following example the inherited function paint() is overridden by the new definition. This function is used to draw the applet in the browser, or the appletviewer.
myApplet.java:
import java.applet.Applet;
import java.awt.Graphics;public class myApplet extends Applet
{
public void paint(Graphics g)
{
g.drawString(“Welcome to 1.124”, 50, 35);
}
}
The above program is compiled using the javac compiler, i.e. executing the command:
javac myApplet.java
The resulting file with the bytecode is the myApplet.class which takes its name from the name of the class. This file can be loaded and interpreted in any Java® enabled browser, or the appletviewer, using an html file.
The html code is used to specify at least the location and the dimensions of the applet to be loaded.
myApplet.html:
<HTML>
<HEAD>
<TITLE> A simple program to run a Java Applet</TITLE>
</HEAD><BODY>
Here is the class myApplet is loaded:
<APPLET CODE=“myApplet.class” WIDTH=150 HEIGHT=100 align=center>
</APPLET>
</BODY></HTML>
It is possible to write a Java® program that can work both as an applet and as an application.
You can, also, find detailed instructions on how to write your first Java® program at the Lesson: “Your First Cup of Java®” of the on line Java® Tutorial, which is provided by SUN.
3. Data Types
Java® has two kinds of data types, primitive and reference data types. Primitive data type variables contain a value of the corresponding data type, while reference data type variables, such as arrays and classes, contain a reference to the actual set of values.
The following are the primitive (or built-in) data types:
- boolean (boolean value, true or false)
- char (2-byte, character - Unicode)
- byte (1-byte, signed integer)
- short (2-byte, signed short integer)
- int (4-byte, signed integer)
- long (8-byte, signed long integer)
- float (4-byte, floating point)
- double (8-byte, double precision floating point)
It is allowable to assign the value of a primitive data type variable of one type to another without an explicit cast if the variable being assigned to is to the right of the source type in the following order list.
byte < short < int < long < float < double
A char can be promoted to an int, long, float or double. However, a boolean cannot be converted to any other primitive data type, since boolean values are not considered to be numbers. The following table presents all the allowable promotions:
An assignment from a "higher" order to a "lower" one is allowed only when an explicit cast is used, because information may be lost in the conversion, e.g.: int x = (int) 4.75;
Each of the primitive data types has a corresponding class, called wrapper class, defined in the java.lang package. e.g. a double primitive data type has the corresponding class Double.
4. Variables, Declarations, Definitions, Initializations, and Assignments
The data type of every variable has to be specified in a definition, by preceding the name of the variable that is defined with a data type. A data type can be one of the built-in (primitive) data types, one of the data types defined in the provided Java® packages, or the user defined data type. The name of a variable must be a legal identifier and it should not be the same with any other variable that is defined in the same scope.
The scope of a variable is where the variable is accessible. It is specified by the location where the variable is defined. There are 4 different scope categories:
- local variables: variables defined anywhere in a function
- member variables: data members of a class (static or non static)
- function parameters: parameters of functions in which values are passed when invoking the function
- exception-handler parameters: parameters of exception-handlers in which values are passed when the exception handler is called.
Local variables are undefined prior to initialization. Therefore, a local variable must be either initialized or assigned a value before being used. The scope of a local variable is from the point where it has been defined up to the end of the code block in which it has been defined. The memory allocated for a local variable is automatically reclaimed when control passes out of its scope, upon exiting the block in which it is defined.
The scope of a function or an exception-handler parameter is the entire corresponding function.
A named constant can be defined using the keywords static and final. Static indicates that it is a class variable, while final indicates that its value cannot be changed after it has been initialized.
e.g.: static final double PI = 3.1415926;
5. Operators, Precedence, Associativity, Type Conversions, and Mixed Expressions
Java® has the following categories of operators. Some of them can be used as either unary or binary. Also, in Java® the operator corresponding to the C++ conditional operator is a ternary operator, i.e. it has 3 operands.
- arithmetic: + , - , *, / , %
- shorthand arithmetic: ++ , --
- relational: > , < , >= , <= , == , !=, instanceof
- conditional: && , || , ! , &, |
- assignment: =
- shorthand assignment: += , -= , *= , /= , %=, etc.
- bitwise and shift operators: >> , << , etc.
- conditional operator: (logical Test) ? trueStatement : falseStatement
The order in which the operations in expressions are performed is decided according to the precedence and associativity rules, which are the same as in C++. According to any precedence table, the operators of higher precedence are evaluated first, before operators with lower precedence.
The following precedence table (copied from the Java® Tutorial) lists the operators according to their precedence order. Higher precedence operators are evaluated before lower precedence operators.
For operators on the same line, that have equal precedence, associativity decides which operator to be executed first. In Java® all operators, except the assignment operators, have left associativity.
6. Control Structures
Control structures, similar to those of C++, are used to specify the flow of control in Java® programs.
A block of statements, i.e. statements within curly braces, may appear instead of a single statement.
The following are the control structures of Java®:
- Selection control structures
if(logical test)
statement;
if(logical test)
statement;
else if(logical test)
statement;
else
statement;
switch(variable)
{
case value1:
statements
break;
case value3:
statements
break;
case value4: case value5:
statements
break;
default:
statements
}
- Repetition control structures (looping)
for(intialization ; logical test; modification)
statement;
while(logical test)
statement;
do
{
statements;
} while(logical test);
Java® provides break and continue as branching statements. The former causes an exit from the block of statements in which it resides, while the latter causes the flow of control to be transferred to the next iteration. There are also labeled versions of break and continue, in which control is transferred to the block with the specified label; the labeled break and labeled continue are useful in nested loops. A return statement is used to return from a function, passing control back to the invoking function.
7. Comments
Java® supports 3 kinds of comments. The familiar C++ kinds of comments are supported: the pair /* */, which encloses a comment, and //, which indicates that the remainder of the line is a comment.
In addition, Java® supports the documentation comment, which is enclosed between /** and */. Comments of this kind are used to automatically generate documentation using the javadoc tool of the Java® Development Kit (JDK).
Having written a Java file, such as the file Welcome1.java below, one can use javadoc to automatically create an html file corresponding to that Java source code.
Welcome1.java:
/**
* This class can take a variable number of parameters on the command
* line. Program execution begins with the main() method.
*/
public class Welcome1
{
/**
* The main entry point for the application.
*
* @param args Array of parameters passed to the
* application via the command line.
*/
public static void main (String[] args)
{
System.out.println(“Welcome to 1.124!!!!”);
}
}
Then, the javadoc can be used to create the corresponding html file:
>javadoc Welcome1.java
Loading source file Welcome1.java…
Constructing Javadoc information…
Building tree for all the packages and classes…
Building index for all the packages and classes…
Generating overview-tree.html…
Generating index-all.html…
Generating deprecated-list.html…
Building index for all classes…
Generating allclasses-frame.html…
Generating index.html…
Generating packages.html…
Generating Welcome1.html…
Generating serialized-form.html…
Generating package-list…
Generating help-doc.html…
Generating stylesheet.css…
8. Arrays
An array is a set of values of the same data type, stored together as an entity in a contiguous part of memory, that can be accessed using an integer index.
The declaration of an array in Java® does not allocate any memory; it simply defines a reference to an array. A new statement is required to actually allocate the memory. An element of the array is then accessed using an index within square brackets. An array in Java® has a length field which stores the number of its elements.
The class System has a function called arraycopy() that can be used to copy part or the whole array to another array.
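A short sketch of System.arraycopy(), whose five arguments are the source array, the starting position in the source, the destination array, the starting position in the destination, and the number of elements to copy:

```java
public class ArrayCopyDemo
{
    public static void main(String[] args)
    {
        int[] src = {10, 20, 30, 40, 50};
        int[] dst = new int[5];
        // Copy 3 elements, starting at src[1], into dst starting at dst[0].
        System.arraycopy(src, 1, dst, 0, 3);
        for (int j = 0; j < dst.length; j++)
            System.out.print(dst[j] + " ");   // prints: 20 30 40 0 0
        System.out.println();
    }
}
```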
A function to which an array is passed as an argument can change it, since what is passed by value is the reference to that array. An array can also be returned from a function, i.e. the return data type of a function can be an array.
The length of an array is fixed upon its definition and cannot be modified. A class named Vector can be used to represent an array whose size can be modified.
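A minimal sketch of java.util.Vector as a growable array (the variable names are illustrative); unlike a plain array, elements can be added and removed after creation:

```java
import java.util.Vector;

public class VectorDemo
{
    public static void main(String[] args)
    {
        Vector v = new Vector();        // growable; no fixed length
        v.add("alpha");
        v.add("beta");
        v.add("gamma");
        v.remove(0);                    // shrink: remove the first element
        System.out.println("size = " + v.size());    // size = 2
        System.out.println("first = " + v.get(0));   // first = beta
    }
}
```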
An array can contain references to other arrays or objects, in which case memory for the individual members of the array must also be explicitly allocated using a new statement.
A multidimensional array is defined as an array whose elements are themselves arrays. If each row has the same number of columns, such an array is defined using a statement similar to the following: double x[][] = new double[n][m]. Otherwise, a separate new expression is used to dynamically allocate memory for each row.
Example of Arrays
class introArrays
{
public static final int SIZE=5;

public static void main(String args[])
{
double d[] = new double[SIZE];
int [] i ;
i = new int[SIZE];

for(int j=0 ; j<d.length ; j++)
d[j] = j/2;
for(int j=0 ; j<i.length ; j++)
i[j] = j*j;

for(int j=0 ; j<SIZE ; j++)
System.out.println( " d[" + j + "] = " + d[j]);
for(int j=0 ; j<i.length ; j++)
System.out.println( " i[" + j + "] = " + i[j]);

int m=5, n=3;
int im, in;
double x[][] = new double[m][n];

for(im=0;im<m;im++)
{
System.out.println();
for(in=0;in<n;in++)
{
System.out.print(" " + x[im][in] + " ");
}
}
}
}
Output:
d[0] = 0.0
d[1] = 0.0
d[2] = 1.0
d[3] = 1.0
d[4] = 2.0
i[0] = 0
i[1] = 1
i[2] = 4
i[3] = 9
i[4] = 16
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
In Java® you can easily have a ragged array, such as a 2-D array with rows of different lengths. When such an array is created, only the length of the primary array must be specified. The lengths of the sub-arrays can be left unspecified until they are created, as shown in the following example.
Example of a Ragged Array
class TriangularArray
{
public static void main(String args[])
{
int i, j;
double [][]x;
x = new double[5][];
for(i=0;i<5;i++)
{
x[i] = new double[i+1];
System.out.println();
for(j=0;j<i+1;j++)
{
x[i][j] = (i+1)*10+j+1;
System.out.print(" " + x[i][j] + " ");
}
}
}
}
Output:
11.0
21.0 22.0
31.0 32.0 33.0
41.0 42.0 43.0 44.0
51.0 52.0 53.0 54.0 55.0
9. Classes and Objects
In Java® every function and data member belongs to a class and must be defined within a class declaration, i.e. global functions and variables are not allowed. A class may contain data (fields) and functions (methods), which can be class (static) or instance members of the class.
An object is an instance of a class. It is created using the new operator, which instantiates the class, allocating memory for a new object, and initializes the object's data members, usually through constructors rather than directly. The new operator returns a reference to the new instance of the class that has been created, i.e. to the new object. The new object is typically referenced by the variable at the left side of the new statement. The declaration of an object, e.g. Point p, does not allocate any memory for an instance of the class Point. It simply declares that p can be used as a reference to an instance of the class Point. In Java® memory for objects is allocated from the heap using the keyword new. If there is not enough memory to be allocated, the garbage collector may run to reclaim some memory; if there is still not enough memory, an OutOfMemoryError is thrown. The variable that is associated with an object, in contrast to a variable of primitive data type, is actually a reference to that object.
A class data member, or static field, is a field that is shared among all objects of that class, as in C++. Similarly, static (or class) functions are methods that can operate on class member data (static fields), or perform operations on the entire class rather than on a certain instance of the class (i.e. object). A static function can access only static members (variables or functions) of the class, since it is not invoked on a specific object. Static fields and methods are declared using the keyword static in their declaration. Class variables and methods are accessible from the class itself: there is no need to create an object, i.e. to instantiate the class, in order to access its class (i.e. static) members. The static variables of a class are initialized before any use of any static variable and before the use of any of the member functions of the class.
When a class is defined, the keyword class followed by the name of the class is required in the class declaration. The class body, in curly braces, follows the class declaration. Other possible components of a class declaration are public, abstract, final, extends (superclass), and implements (one or more interfaces). If any, or all, of these optional components are not provided, the Java® compiler uses the defaults: a non-public, non-abstract, non-final subclass of the class Object that does not implement any interface.
A public class is publicly accessible, i.e. it can be used by classes in any package, not necessarily classes in its package.
An abstract class is a class that cannot be instantiated, i.e. no objects of that class can be created. An abstract class must be subclassed to be used, since it may contain methods with no implementation, i.e. abstract methods. An abstract class may provide definitions of all or some of the methods that its subclasses inherit. Although an abstract class cannot be instantiated, a reference of an abstract class type can be defined and used to achieve polymorphism. Typically, some of the functions are left to be implemented by the subclasses. If a class contains an abstract function then the class is abstract and cannot be instantiated; in that case the class must be explicitly declared abstract.
A final class is a class that cannot be subclassed. Specifying a class as final automatically implies that all its methods are considered to be final. Specifying a class, or a function, as final may sometimes be useful, considering security and optimization issues.
The extends <superclass> specifies that the class that is declared is a subclass of the provided <superclass>.
Finally, implements <interface1>, <interface2>, … specifies that the class implements one or more interfaces, whose names are provided after the keyword implements in a comma-separated list.
Note that when a reference variable, i.e. a variable that refers to an instance of a class, is assigned to another reference variable, both variables refer to the same object. The function clone() may be used to make a complete copy of an object, i.e. copy the object's state into a new, identical object at a different memory location.
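The difference between reference assignment and cloning can be seen with arrays, which support clone() directly (a minimal illustrative sketch):

```java
public class CloneDemo
{
    public static void main(String[] args)
    {
        int[] a = {1, 2, 3};
        int[] b = a;           // b and a refer to the SAME array
        int[] c = a.clone();   // c is a new, independent copy
        b[0] = 99;
        System.out.println(a[0]);   // prints 99 : changed through the alias b
        System.out.println(c[0]);   // prints 1  : the clone is unaffected
    }
}
```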
The following is an example of a simple class. The class has two instance fields, x and y, and one class (or static) field, and it has no functions.
Example of a Simple Class
class Point
{
public double x, y=100; // data members (or fields)
public static int num = 0; // class or static data member
}

class introClasses
{
public static void main(String args[])
{
Point p1; // declaration of a Point object
Point p2 = new Point();
Point.num++;

p1 = new Point();
p1.y = 1.1;
p1.x = 2.2;
p2.num ++;

System.out.println("Number = " + Point.num);
System.out.print("\n p1: x = " + p1.x + " y = " + p1.y );
System.out.println("\n p2: x = " + p2.x + " y = " + p2.y );
}
}
Output:
Number = 2
p1: x = 2.2 y = 1.1
p2: x = 0.0 y = 100.0
10. Constructors
A class can provide one or more constructors to make the proper initializations of newly created objects. As in C++, a constructor has the same name as the class and has no return type. Java® supports constructor overloading: constructors are differentiated from each other by the number and types of their parameters, and upon an object's creation the compiler invokes the constructor that matches the provided arguments. A constructor with no arguments is known as the default (or no-argument) constructor. If no constructors are provided, Java® supplies by default a no-argument constructor which does nothing. Upon the creation of an object its data fields are first set to a default value: zero for numeric types, '\u0000' for char, false for boolean, or null for reference types. Then, the initializers and initialization blocks are executed to initialize the fields. Finally, the constructor is called, which first invokes, explicitly or implicitly, its superclass constructor, and then executes the statements in its body.
Another constructor of the same class can be invoked from inside a constructor using this followed by parentheses in which arguments may be provided. This is called explicit constructor invocation. A constructor of a superclass can be invoked from inside a constructor using the super keyword, likewise followed by parentheses. The invocation of a superclass constructor must be the first statement in the subclass constructor, so that the superclass initialization is performed first. If it is omitted, the superclass default constructor is invoked implicitly.
A constructor of a class can be specified as private, protected, package (the default), or public, which determines which classes are eligible to create instances of that class. When a constructor is private, no other class can instantiate the class; an object can be created only if the class itself provides a public static function that creates instances. When a constructor is specified as protected, only subclasses (and classes in the same package) can use it to create objects of the class. If a constructor is specified as public, any class can create an object of the class using that constructor. Finally, a package-level constructor can be used only by classes within the same package to create objects of the class.
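The private-constructor case above can be sketched as follows (the Counter class is hypothetical, not from the notes): the only way to obtain an instance is through the public static function, so every caller ends up sharing one object.

```java
public class Counter
{
    private static Counter instance = null;
    private int count = 0;

    private Counter() { }          // no other class can call new Counter()

    public static Counter getInstance()
    {
        if (instance == null)
            instance = new Counter();   // allowed: we are inside the class
        return instance;
    }

    public int next() { return ++count; }

    public static void main(String[] args)
    {
        Counter c1 = Counter.getInstance();
        Counter c2 = Counter.getInstance();
        System.out.println(c1 == c2);     // prints true : same object
    }
}
```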
Example on Constructors
class Point
{
private double x, y;

Point() // default constructor
{
x = y = 0.0;
}
Point(double x, double y) // constructor overloading
{
this.x = x;
this.y = y;
}

public String toString() // toString method
{
return ("x = " + x + " y = " + y);
}
}

class constructorsClasses
{
public static void main(String args[])
{
Point p1= new Point(), p2;
p2 = new Point(2.22,4.8);

System.out.println("\n p1: " + p1);
System.out.println("\n p2: " + p2);
System.out.println();
}
}
Output:
p1: x = 0.0 y = 0.0
p2: x = 2.22 y = 4.8
11. Initializers
In Java® one can use static initializers and instance initializers to provide initial values for static (i.e. class) and instance (i.e. object) data members. Static initializers cannot call functions that are declared to throw checked exceptions.
If no values are provided to the variables of a class using either initializers or constructors a zero, ‘\u0000’, false, or null value is assigned depending on the variable’s data type.
The following example demonstrates the use of static and instance initializers.
Initializers Example
class Initialization1
{
public static void main(String args[])
{
Point p = new Point();
System.out.println("Number of points = " + Point.pointsNumber);
System.out.println("(x,y) = (" + p.x + "," + p.y + ")") ;
}
}

class Point
{
double x=200, y=100; // instance initializers
static int pointsNumber=1; // static initializer
}
Output:
Number of points = 1
(x,y) = (200.0,100.0)
However, these initializations can only be simple assignment expressions, i.e. without the ability to call other methods or handle exceptions. For such cases a static initialization block can be used to initialize static members, while a constructor is, in general, used for the initialization of instance members. Constructors are called after the initializers have assigned values to the member data.
A static initialization block is a block enclosed with curly braces with the keyword static before it. The static initialization blocks and the static initializers are called in order from left to right and from top to bottom.
Static Initialization Block Example
class Point
{
double x=200, y=100; // instance initializers
static int pointsNumber;

static // static initialization block
{
pointsNumber = 1;
System.out.println("Inside static initialization block");
}
}

class Initialization2
{
public static void main(String args[])
{
System.out.println("Inside main");
Point p = new Point();
System.out.println("Number of points = " + Point.pointsNumber);
System.out.println("(x,y) = (" + p.x + "," + p.y + ")") ;
}
}
Output:
Inside main
Inside static initialization block
Number of points = 1
(x,y) = (200.0,100.0)
Java® also supports instance initialization blocks (or non-static initialization blocks), which are sometimes useful, such as in anonymous classes where having constructors is not possible. An instance initialization block is placed in the class body, and is executed upon the creation of an object before the invocation of the corresponding constructor. The order of execution of instance initializers and initialization blocks is again from left to right and from top to bottom.
An anonymous class is a class without a name that is defined within another class. At the same time that it is defined an instance of it is created using the new keyword. Since an anonymous class has no name it cannot have any constructors.
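A minimal sketch of an anonymous class whose setup is done in an instance initialization block, since, as noted above, an anonymous class cannot have a constructor (the names are illustrative):

```java
public class AnonymousDemo
{
    public static void main(String[] args)
    {
        // An anonymous class implementing Runnable; the instance
        // initialization block plays the role of a constructor.
        Runnable r = new Runnable()
        {
            private int id;
            {                    // instance initialization block
                id = 42;
            }
            public void run()
            {
                System.out.println("running, id = " + id);
            }
        };
        r.run();   // prints: running, id = 42
    }
}
```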
12. Member Data and Functions
Both member data and member functions are accessed and invoked, respectively, using an object reference followed by a dot and at the right of the dot the name of the data or function of the class. The object reference can be any expression that returns a reference to an object of the class, e.g. a new operator. In the invocation of an object’s function, parentheses, which may contain provided arguments, follow the name of the function. Invoking a function of an object is known as “sending a message” to that object.
Member data (or member variables) are declared using the data type followed by the name of the variable. The data type can be any primitive or reference data type, while the name should be any legal Java® identifier. In addition, the following attributes can also be specified:
- access level: specifies the access level for this variable, which can be public, package, protected, and private.
- static: specifies that the variable is static (i.e. class) variable
- final: specifies that the value of the variable after it is assigned cannot be modified
- transient: indicates that the variable is not part of the persistent state of the object, i.e. it is skipped during serialization
- volatile: indicates that the Java compiler should not perform certain optimizations on the variable
Member functions are typically provided, as in C++, to operate on member data, allowing data encapsulation, i.e. hiding data behind functions (methods). However, in Java® global functions are not allowed: every function must be provided within a class definition. Externally defined member functions are also not possible in Java®, since everything must be defined within a class. Java® supports recursion, allowing a function to call itself either directly (direct recursion), or indirectly (indirect recursion) through another function.
A function has two parts, the function declaration and the function body. The function declaration must provide the return data type and the name of the function followed by parentheses that enclose the parameters of the function.
A function may return a value or no value in which case it is declared as void. A function that returns an object of a class can return an object of any subclass of that class as well. In addition, a function may have an interface as a return type, in which case an object of any class that implements that interface may be returned.
The function name should be any legal Java® identifier. The name of a function can be the same as the name of a data member of the class. Java® supports, as C++ does, function overloading, allowing functions to have the same name as long as they differ in the number and/or types of their parameters. The signature of a function is its name together with the number and types of its parameters. Functions with different signatures, although with the same name, are allowed.
A function may have any number of arguments, including none. A function with no arguments is defined using empty parentheses. An argument with the same name as a member variable of the class hides the member variable; in that case the reference this can be used. The latter, i.e. this, is a reference to the object on which the member function has been invoked. The reference this may be used to pass a reference to that object as an argument to other member functions. Similarly, the reference super refers to the superclass of a class and can be used when a member variable or function of a superclass is hidden. In Java®, it is not possible to pass a function (or a pointer to a function) as an argument to a function.
All arguments to functions in Java® are passed by value: the parameters hold copies of the values of the arguments passed to the function, so a function can modify neither a caller's primitive variable nor the caller's reference itself (although it can modify the object that a reference points to). Declaring a function parameter using the final modifier prohibits the modification of that parameter within the function.
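The pass-by-value rule above can be sketched as follows (the class and method names are illustrative):

```java
public class PassByValue
{
    public static void tryToChange(int n, int[] a)
    {
        n = 999;               // changes only the local copy of the primitive
        a[0] = 999;            // changes the object the reference points to
        a = new int[]{ -1 };   // rebinding the local reference has no outside effect
    }

    public static void main(String[] args)
    {
        int n = 1;
        int[] a = {1, 2};
        tryToChange(n, a);
        System.out.println(n);     // prints 1   : primitive argument unchanged
        System.out.println(a[0]);  // prints 999 : array contents changed via the reference
    }
}
```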
A function declaration may also provide more information about it using any of the following attributes:
- access level: specifies the access level for this variable, which can be public, package, protected, and private.
- static: specifies that the function is static (i.e. class) function, i.e. that it is not associated with a certain object of the class
- abstract: indicates that the method is not implemented. Therefore, the class is abstract and cannot be instantiated.
- final: specifies that the function cannot be overridden by a subclass
- native: indicates that the function is implemented in another language (e.g. C++)
- synchronized: indicates that certain precautions are taken to ensure that functions that operate on the same data do so in a thread-safe way.
- throws <exceptions>: specifies the checked exceptions that the function may throw
An implicit reference to the object with which a function is invoked, called this, is available in every non-class (i.e. non-static) function. It is used to explicitly refer to members of the object that have invoked the function, or when an object reference is required.
A function of a class named toString() that takes no arguments and returns a String is special: it allows the object to be used in string concatenation with the + operator, e.g. in a println statement. Note that all primitive data type values are implicitly converted to String objects whenever they are used in String expressions.
In Java® there is no need to explicitly destroy objects that are no longer needed. Java® provides a garbage collector that periodically frees the memory of objects that are no longer referenced. When a variable that references an object goes out of scope or is set to null, that object, if not referenced by any other variable, becomes eligible for garbage collection.
Example on Member Data and Methods
class Point
{
private static int num = 0; // static field
private double x, y; // non-static data members (fields)

/* set methods */
public void setX(double x)
{
this.x = x;
}
public void setY(double yy)
{
y = yy;
}
public static void incrNum() // static method
{
num++;
}

/* get methods */
public double getX()
{
return x;
}
public double getY()
{
return y;
}
public static int getNum() // static function
{
return num;
}

public String toString() // toString method
{
return ("x = " + x + " y = " + y);
}
}

class methodsClasses
{
public static void main(String args[])
{
Point p1, p2 = new Point();
Point.incrNum();

p1 = new Point();
p1.incrNum();
p1.setX(1.1);
p2.setX(2.2);

System.out.print("\nNumber = " + Point.getNum());
p1.setY(0.11111);
System.out.print("\n p1: x = " + p1.getX() + " y = " + p1.getY() );
System.out.print("\n p2: " + p2 );
System.out.println();
}
}
Output:
Number = 2
p1: x = 1.1 y = 0.11111
p2: x = 2.2 y = 0.0
13. Function Overloading
Java® allows function overloading, with which the selection of the function is based on its signature. The signature of a function is its name together with the number and types of its parameters; the return type of a function is not part of its signature.
Example on Function Overloading
class methodsOverloading
{
public static void main(String args[])
{
myPrint();
myPrint(3);
myPrint(1.7);
}

public static void myPrint()
{
System.out.println(" Inside myPrint()");
}
public static void myPrint(int i)
{
System.out.println(" Inside myPrint(): i =" + i);
}
public static void myPrint(double x)
{
System.out.println(" Inside myPrint(): x = " + x );
}
}
Output:
Inside myPrint()
Inside myPrint(): i =3
Inside myPrint(): x = 1.7
These notes were prepared by Petros Komodromos.
Topics
- Sun Java® Studio Standard 5
- Inheritance
- Controlling access to class members
- Strings
- Packages
- Interfaces
- Nested classes and interfaces
- Garbage Collection
- Applets
1. Sun Java Studio Standard 5
Sun Java® Studio Standard 5 is an integrated Java® development environment (IDE) that provides visual design, editing, compilation, debugging, and deployment of Java® software. It is itself written entirely in Java® and is available from Sun's Java® web site as Sun Java® Studio Standard 5 Update 1. If you prefer a free IDE and only require J2SE and web application development capabilities, use the open source IDE NetBeans.
2. Inheritance
A class (called subclass) can inherit the behavior of another class (called superclass). This is called inheritance and it is essential for Object Oriented Programming (OOP), allowing the extension of existing classes. A class is extended using the keyword extends following the name of the new class (i.e. the subclass) and before the name of the superclass.
e.g.: class Pixel extends Point
A subclass object can be used in any function where an object of its superclass is expected. Note that the converse, i.e. using a superclass object where a subclass is expected, is, in general, not allowed. This feature enables polymorphism, another important characteristic of OOP, which allows a different function to be called depending on the actual object that is associated with the function call. Polymorphism is a Greek word meaning many (poly) forms or shapes (morphes). An object in Java® can be polymorphic, i.e. take various different forms; this is possible because a subclass object can be used wherever its superclass is legal, combined with the late binding feature of Java®. With late binding the compiler does not associate a function call with a certain function during compilation, but delays that decision until execution (i.e. run) time. In contrast, with static binding the decision of which function to invoke is made at compile time.
Although when calling a function the decision of which function of a class to call depends on the actual class of the object, when accessing a data field of an object the decision is according to the type of the reference to the object.
Although a subclass object can be assigned to a superclass reference, i.e. to a variable that may refer to a superclass instance, the converse is not allowed unless an explicit cast is used. The syntax for casting between two object types is similar to casting from one primitive data type to another. To avoid illegal casts, the instanceof operator may be used to verify the data type of the object prior to any cast. Casting of objects is allowed only between classes within an inheritance hierarchy. If an object has been assigned to a reference of its superclass type, then to be able to use the subclass's functions it must be cast back to the subclass.
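The instanceof check and downcast described above can be sketched as follows (the Shape and Circle classes are illustrative, not from the notes' examples):

```java
class Shape { }

class Circle extends Shape
{
    double radius = 2.0;
    double area() { return Math.PI * radius * radius; }
}

public class CastDemo
{
    public static void main(String[] args)
    {
        Shape s = new Circle();       // subclass object, superclass reference
        if (s instanceof Circle)      // verify the actual type before casting
        {
            Circle c = (Circle) s;    // explicit downcast back to the subclass
            System.out.println("area = " + c.area());
        }
    }
}
```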
An implicit reference, named super, is available and refers to the members of the superclass of the class of the object that invoked a function. It is similar to the this reference.
Subclasses can add other member and class data, or member or class functions. Data fields of a subclass can hide data fields of a superclass if they are given the same name. Then, although the data fields of the superclass exist they can be accessed only indirectly. Similarly, a subclass can inherit and use, or override, the methods provided in its superclasses. Member functions of the superclass can be overridden by new implementations provided in the subclass. However, a subclass cannot override superclass functions that have been declared as final or static (i.e. class functions). It can only override accessible non-static functions. A subclass must override functions that are declared as abstract in the superclass or the subclass must be abstract itself.
A subclass function with the same name as a superclass function may have different parameters, in accordance with function overloading. However, a subclass function with the same signature as a superclass function must also have the same return type. The overriding function must have the same signature (name and parameters) and the same return type as the function it overrides. Which function is called depends on the actual class of the object that is referenced, not on the declared class of the reference itself.
The overriding function can have a different access specifier, but only if it allows more access than the superclass function that it overrides. It can also have a different throws clause, but it may not declare an exception type that is not a subtype of the exceptions specified by the superclass function. The hidden variables and overridden functions of a superclass can be accessed using the super keyword.
Inheritance Example
class Point
{
private double x, y;

Point()
{
System.out.println(" In default constructor of Point class");
x = y = 100.0;
}
Point(double x, double y)
{
System.out.println(" In constructor Point(double x, double y)");
this.x = x;
this.y = y;
}

public void set(double x, double y)
{
System.out.println(" In set method of Point");
this.x = x;
this.y = y;
}

public String toString()
{
System.out.print(" In toString method of Point class");
return ("x = " + x + " y = " + y);
}
}

class Pixel extends Point
{
private int color;

Pixel()
{
System.out.println(" In default constructor of Pixel class");
color = 111111111;
}
Pixel(double x, double y, int color)
{
super(x,y);
System.out.println(" In Pixel(double x, double y, int color) ");
this.color = color;
}

public void set(double x, double y, int color)
{
System.out.println(" In set of Pixel class calling Point’s set");
super.set(x,y);
this.color = color;
}

public String toString()
{
System.out.print(" In toString method of Pixel class");
return ("color = " + color );
}
}

class introInheritance
{
public static void main(String args[])
{
Pixel p1;
p1 = new Pixel();
p1.set(1.11, 0.22, 100111010);
System.out.println();

Point p2 = new Point(200,25.7);
Pixel p3 = new Pixel(3.33,9.6,111000111);
Point p4 = new Pixel();
System.out.println();

System.out.println("\n p1: " + p1);
System.out.println("\n p2: " + p2);
System.out.println("\n p3: " + p3);
System.out.println("\n p4: " + p4);
System.out.println();
}
}
Output:
In default constructor of Point class
In default constructor of Pixel class
In set of Pixel class calling Point’s set
In set method of Point
In constructor Point(double x, double y)
In constructor Point(double x, double y)
In Pixel(double x, double y, int color)
In default constructor of Point class
In default constructor of Pixel class
In toString method of Pixel class
p1: color = 100111010
In toString method of Point class
p2: x = 200.0 y = 25.7
In toString method of Pixel class
p3: color = 111000111
In toString method of Pixel class
p4: color = 111111111
Java® does not allow multiple inheritance. All classes in Java® are subclasses of the class Object, which is defined in the package java.lang. Even if no extends clause is used, a class is considered to be a subclass of the Object class. Variables of class Object can be used to refer to any reference data type, object or array. The functions toString(), equals(), clone(), and finalize() are functions of the class Object that are often overridden by subclasses. The toString() method returns a string representation of the object. The clone() method is used to make a field-by-field copy of one object into a new one, which is returned by the method. The equals() method is used to compare two objects for equality according to a selected equality test. Finally, the finalize() method can be used to clean up resources that the object may have allocated, before it is garbage collected. Another useful function is getClass(), which, however, cannot be overridden since it is a final function. This function returns the runtime class of the object, an instance of Class, which can then be queried for various information. The class Class provides various useful functions.
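A brief sketch of a few of these Object methods on Strings, which already override equals() and toString() (the values are illustrative):

```java
public class ObjectMethodsDemo
{
    public static void main(String[] args)
    {
        String a = new String("1.124");
        String b = new String("1.124");
        System.out.println(a == b);        // prints false : two distinct objects
        System.out.println(a.equals(b));   // prints true  : same contents
        System.out.println(a.getClass().getName());   // prints java.lang.String
    }
}
```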
Object Wrappers
Java® provides object wrappers, classes that are counterparts of the primitive data types and allow a value of a primitive data type to be treated as an object.
The wrapper classes, which are Boolean, Character, Byte, Double, Float, Integer, Long, Short, and Void, are final classes that are useful when doing generic programming. The wrapper classes provide several useful functions.
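A short sketch of wrapping and unwrapping primitives, and of the parsing utilities the wrapper classes provide:

```java
public class WrapperDemo
{
    public static void main(String[] args)
    {
        Integer i = Integer.valueOf(42);        // wrap a primitive in an object
        int n = i.intValue();                   // unwrap it again
        int parsed = Integer.parseInt("123");   // parse a string to an int
        double d = Double.parseDouble("3.5");
        System.out.println(n + " " + parsed + " " + d);   // prints: 42 123 3.5
        System.out.println(Integer.MAX_VALUE);            // prints: 2147483647
    }
}
```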
3. Controlling Access to Class Members
Java® provides, as C++ does, an access control mechanism which restricts the access to member data and functions of a class. In Java® the access level is specified for each individual member of the class. There are 3 different access specifiers which may be used to specify one of the 4 access levels. The access specifiers (or access control modifiers) are the private, protected, public and package which are specified as follows:
A private member of a class is accessible only within the class itself. Instances of the same class have access to each other’s private members. The keyword private is used to specify a private member.
A protected member of a class is accessible to, and can be inherited by, the class itself, its subclasses, and classes defined in the same package. A protected member (data or function) can be accessed using a reference variable that is either a reference to the class itself or to one of its subclasses. Protected static data and functions of a class can be accessed in any of its subclasses, as well as by any class in the same package. The keyword protected is used to specify a protected member.
A package member is accessible from and is inheritable by all classes in the same package as the class. This is the default access level used when no access specifier is used.
Finally, a public member is accessible from all classes and is inherited by any subclass. It is specified using the keyword public.
Therefore, a subclass inherits the members of its superclass that are protected and public, as well as the members without an access specifier, i.e. the package members, as long as the subclass is in the same package as the superclass. However, subclass member variables hide superclass member variables with the same name, and subclass member functions override the corresponding ones in the superclass. The keyword super can be used to refer to hidden variables and overridden functions of the superclass.
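A small sketch of these rules (the classes Shape, Circle and AccessDemo are hypothetical examples): a private member stays internal, a protected or package member is inherited, and super reaches a hidden superclass variable.

```java
class Shape
{
    private int id = 1;              // accessible only within Shape itself
    protected String name = "shape"; // accessible to subclasses and same-package classes
    public int getId() { return id; }
}

class Circle extends Shape
{
    String name = "circle";          // hides the superclass variable of the same name

    String describe()
    {
        // super refers to the hidden superclass member variable
        return super.name + " / " + name + " (id " + getId() + ")";
    }
}

class AccessDemo
{
    public static void main(String args[])
    {
        System.out.println(new Circle().describe());  // shape / circle (id 1)
    }
}
```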
4. Strings
The String class can be used to deal with strings, i.e. sequences of characters. This class provides several functionalities that can be used on strings. It provides a concatenation operator “+”, and a shorthand operator “+=”. In addition, it provides a length() method to obtain the number of characters in the string. A method named equals() is also available to check whether two strings have the same contents.
When concatenating any primitive data type value with a string it is always converted to a string. For objects a special function named toString() which is inherited from the Object class can be overridden to provide this facility.
String objects are read-only; their contents cannot be modified, i.e. they are immutable. When a concatenation or assignment is done, what actually happens is that a new String object is created and referenced. Java® provides another class, named StringBuffer, which can be used to store and modify a set of characters.
Example with Strings
class introStrings
{
    public static void main(String args[])
    {
        String course = new String();
        double number = 1.124;
        String title[] = new String[3];
        title[0] = "Computer" + "-";
        title[1] = "Aided";
        title[2] = " Engineering";
        course = number + ": ";
        for(int i=0 ; i<title.length ; i++)
            course += title[i];
        System.out.println(course);
    }
}
Output:
1.124: Computer-Aided Engineering
5. Packages
A package essentially defines a certain namespace, allowing functions in different packages to use the same names. A package is used to group together a collection of related classes and interfaces.
Every class in Java® belongs to a package, either the one specified at the top in a package statement or the default package. Everything in the java.lang package is by default imported in any Java® program, and therefore, it does not need to be imported.
A package is created by using a package statement in the Java® source code file. The statement is the keyword package followed by the name of the package. Everything defined after that point is considered to belong to that package. When a Java® source code file is compiled, the resulting class file(s) is (are) created in the directory specified by the package statement. An extra option to the javac compiler (javac -d <directory> <myProgram.java>) can be used to specify where to create all subdirectories according to the package statement. To enable the use of any new packages it may be necessary to properly set up the CLASSPATH environment variable, which indicates where to search for user-developed packages.
The classes defined in a package can be used in a program using an import statement. This statement consists of the keyword import followed by the name of the package to be imported. The import statement specifies to the Java® compiler the location of the classes, enabling the use of shorter names for each imported class. No import statements are required if the class files used in a Java® program are in the same directory as the class that uses them.
A class for which a package name has not been specified belongs to the default package. The default package is without a name and it is always imported by default.
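A brief sketch of these mechanics; the package geometry and class Point2D named in the comments are hypothetical, while java.util.Date is a real core class:

```java
// A file geometry/Point2D.java would start with:
//     package geometry;
// and a client elsewhere would then write:
//     import geometry.Point2D;

import java.util.Date;   // import a class from the core java.util package

class PackageDemo
{
    public static void main(String args[])
    {
        // Classes in java.lang (String, System, ...) need no import.
        Date now = new Date();
        // The fully qualified name shows the package the class belongs to.
        System.out.println(now.getClass().getName());   // java.util.Date
    }
}
```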
The Core Java® API provides a collection of packages that can be used.
6. Interfaces
Interfaces are similar to classes, but they contain only declarations of methods. An interface declares sets of constants and functions, without providing implementations for any of them. Interfaces are used to declare methods that should be supported by a class without actually implementing these functions. In particular, interfaces may contain public abstract methods and public final static data.
Any class that implements an interface must provide the actual implementation for any functions that have been declared in the interface. Such a class is said to implement the interface. If even one function is left unimplemented the class is considered to be abstract.
A class is specified to implement an interface using the keyword implements. A class can implement more than one interface, although it cannot extend more than one class. However, a class that implements an interface must provide implementations for all of the functions of the interface.
In addition interfaces can be extended using the keyword extends.
Example with Interfaces
interface Geometry
{
    public void print();
}

class Point implements Geometry
{
    private double x, y;

    Point()
    {
        x = y = 100.0;
    }
    Point(double x, double y)
    {
        this.x = x;
        this.y = y;
    }
    public void print()
    {
        System.out.print(" x = " + x + " y = " + y);
    }
}
class introInterfaces
{
    public static void main(String args[])
    {
        Point p1;
        p1 = new Point();
        Point p2 = new Point(2.2, 0.44);
        System.out.print("\n p1: ");
        p1.print();
        System.out.print("\n p2: ");
        p2.print();
        System.out.println();
    }
}
Output:
p1: x = 100.0 y = 100.0
p2: x = 2.2 y = 0.44
7. Nested Classes and Nested Interfaces
A nested class is a class defined within another class. Similarly Java® allows nested interfaces, i.e. interfaces which are members of another interface.
The simplest nested class is a static nested class, which is a class with a name provided inside another class with the specifier static used at its declaration. Since it is a member of the enclosing class, its access level can be set using the access specifiers. Outside the enclosing class, the nested class is referred to by the name of the class in which it is nested, followed by a dot and the name of the nested class.
A non-static nested class is called an inner class and it is associated with a particular instance of the class rather than the entire class. When an object of an inner class is created, an enclosing object of the “outer” class is, in general, associated with that object. Inner classes cannot have static members. A local inner class is an inner class that is not a member of a class but is defined in a function of a class and, therefore, is local to that function.
A nested class, non-static nested or inner class, can use any member of its enclosing class without the need for qualification and with access to all members, including the private ones. However, a static nested class can access only the static members of the enclosing class, since it is not associated with any particular object.
In some cases, e.g. handling AWT events, there is no reason for naming an inner class. In those cases, when the class is needed only for a single use, it is preferable to use an anonymous class. An anonymous class is an inner class without a name that is defined at the point where it is needed, rather than somewhere within the class with a name.
Nested interfaces can only be static since an interface cannot have an implementation.
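The nesting rules above can be sketched as follows (Outer, Inner, StaticNested and NestedDemo are illustrative names): the static nested class is qualified by the outer class name, while the inner class needs an enclosing instance and can read its private members.

```java
class Outer
{
    private int value = 7;

    static class StaticNested       // no enclosing instance; sees only static members
    {
        String who() { return "static nested"; }
    }

    class Inner                     // tied to an Outer instance; sees private members
    {
        int twice() { return 2 * value; }
    }
}

class NestedDemo
{
    public static void main(String args[])
    {
        Outer.StaticNested s = new Outer.StaticNested(); // qualified by the outer class
        Outer o = new Outer();
        Outer.Inner in = o.new Inner();                  // requires an enclosing instance
        System.out.println(s.who() + ", twice = " + in.twice()); // static nested, twice = 14
    }
}
```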
8. Garbage Collection
Java® provides a garbage collector which takes care of dynamic memory deallocation. As soon as Java® objects become unreferenced, the corresponding memory may be automatically reclaimed. The garbage collector periodically destroys unused objects in dynamic memory, typically only when running out of memory. In Java® there is no need for explicit dynamic memory release and, therefore, the danger of memory leaks is reduced.
A mark-sweep algorithm is used by the garbage collector. First, the dynamic memory is scanned for referenced objects and then all remaining objects are treated as garbage. Any object that is not referenced in any way is eligible for garbage collection. An object that is referenced only by objects that are themselves unreferenced is also considered unreferenced and is eligible for garbage collection. An object becomes unreferenced when all references that used to refer to it have changed, referring to another object or to null, or when there has been a return from a function in which local references referred to the object. However, a memory leak is still possible when references are kept to objects that are no longer needed.
Prior to the actual deallocation of memory for an object, the object’s finalizer, i.e. a function named finalize, is invoked. This process, known as finalization, allows the object to perform a cleanup of any associated system resources. It is typically used to reclaim external resources that have been allocated, e.g. to make sure that files that have been opened are properly closed. The finalize() function is inherited from the Object class and can be overridden. It is allowable to use try-catch in a finalize function to handle exceptions that may result from function calls within its body. In general, the last thing that a finalize() method should do is call super.finalize() to give the superclass the chance to finalize itself, since overriding finalize() hides the superclass’s version.
The garbage collection may be performed at any time and in any order according to efficiency considerations of the garbage collector. However, it is possible to explicitly request finalization and garbage collection using the System.runFinalization() and System.gc() commands, respectively.
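A sketch of this finalization idiom (class names are illustrative); keep in mind that System.gc() and System.runFinalization() are only requests, so the finalizer may or may not have run by the time the program exits.

```java
class Resource
{
    static boolean cleaned = false;

    protected void finalize() throws Throwable
    {
        try {
            cleaned = true;       // release external resources here, e.g. close files
        } finally {
            super.finalize();     // let the superclass finalize itself as well
        }
    }
}

class GcDemo
{
    public static void main(String args[])
    {
        Resource r = new Resource();
        r = null;                 // the object is now unreferenced and eligible for gc
        System.gc();              // request (not force) garbage collection
        System.runFinalization(); // request that pending finalizers be run
        System.out.println("gc and finalization requested");
    }
}
```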
9. Applets
An applet inherits functionalities that allow it to run in a Java®-enabled browser. Although applets do not need to implement a main method, every applet has to implement at least one of the init, start, or paint methods that it inherits from its superclass. For AWT applets, the class Applet, provided in the package java.applet of the Application Programming Interface (API), is inherited by any applet. For Swing applets the class JApplet is inherited by any applet. Class JApplet extends the AWT Applet class and implements the Accessible and RootPaneContainer interfaces.
A Java® applet is based on a set of conventions and functionalities that are inherited, allowing it to be executed in an appletviewer or any Java®-enabled browser. An html file needs to be used so as to load the class from a Java®-enabled browser, or using the appletviewer provided with the Java® Development Kit (JDK). The html code is used to specify at least the location and the dimensions of the applet to be loaded. When a Java®-enabled browser, or the appletviewer, encounters an <APPLET> tag, it reserves a display area according to the specified width and height for the applet, loads the bytecodes for the specified subclass of Applet, and creates an instance of that subclass. Finally, it calls the applet’s init() and start() methods. The execution of the applet can be customized using the options that the <APPLET> tag provides. When the applet needs to use a class, the browser first tries to find the class on the host that’s running the browser, and if it cannot find it there, it searches for it in the same place from where the Applet subclass bytecode came in order to create that applet’s instance.
An AWT applet inherits, since it extends it, the Applet class provided by the java.applet package of the Java® Core API. The Applet class extends the AWT (Abstract Window Toolkit) Panel class, which itself extends the Container class (which provides the ability to include other components and to use a layout manager to control the size and position of those components). The latter extends the Component class (which provides the drawing and event-handling capabilities). Swing applets extend the JApplet class, which is a subclass of the AWT Applet class and implements the Accessible and RootPaneContainer interfaces.
As soon as an applet (i.e. an instance of an Applet subclass) is loaded, that instance of the Applet subclass is created, the applet initializes itself (i.e. method init() is called), and starts running (i.e. method start() is invoked). If the user leaves the page with an applet, or iconifies the window that contains an applet, the applet has the chance to stop itself. In case of returning to that page, or restoring the iconified window, the applet can start itself again. Upon quitting a browser that shows an applet, or in general unloading an applet, the applet has the chance to stop itself (i.e. call the method stop()) and do final cleanup (i.e. call the method destroy()).
It is preferable to put the code for initialization of an object of an Applet subclass in the init() method instead of using a constructor, which, in general, should be avoided. The code to be executed by an applet after its initialization should be put in the start() method. The code to stop that execution should be provided in the stop() method.
The main display method of an applet is the paint() method, which is inherited from the Panel class and can be overridden. This method is invoked when the applet needs to draw itself to the screen. Method update() can also be overridden to improve the quality and performance of the drawing.
Any Applet subclass must provide at least one of the init(), start(), and paint() methods.
In general, due to security considerations, an applet cannot read and write files, or start any program on the host that’s executing it. It, also, cannot make network connections except to the host from where it was loaded and it cannot read certain system properties. On the other hand, applets have some additional capabilities that are not available to applications. The Applet API enables an applet to load data files specified relative to the URL of the applet or the page in which the applet is running, force the browser to display a document, find and communicate with other applets running in the same page, play sounds, etc.
Since the Applet class is a subclass of the Panel class, which provides drawing and event-handling capabilities, it is easy to create a GUI (graphical user interface) or, in general, to use graphical components. An applet does not have to create a window to display itself in, since it can display itself within the browser window.
An applet can create and use its own threads to perform time-consuming tasks. In most cases, a browser in which an applet is loaded allocates a thread, or a thread group, for the applet. The drawing methods of an applet (i.e. paint() and update()) are called from the AWT drawing thread.
When implementing Swing applets, the JApplet class should be used instead of the Applet class in order to avoid problems due to mixing of AWT and Swing components. Similarly, when implementing Swing applications the JFrame should be used instead of the Frame class.
These notes were prepared by Petros Komodromos.
Topics
1. Exceptions
Java® provides ways to manage exceptions, i.e. errors that may occur during the execution of a program that disrupt the normal flow of instructions. When an unexpected error condition happens during runtime a Java® program “throws” an exception. This can avoid the premature termination of the program, which in some cases is not necessary. A function that detects an error throws an exception, which can be caught by an exception handler. The exception object contains information about the exception, e.g. its type and the state of the program when the error occurred.
The runtime system tries to find some code to handle the error, i.e. an exception handler. The exception causes a search through the call stack to find an appropriate handler to handle the exception. The Java® runtime system searches backwards through the call stack to find any methods that are interested in handling a particular exception. The corresponding, or the default, exception handler catches the exception and may throw a new exception or attempt to handle the exception. If the runtime system, after an exhaustive search of all the methods on the call stack, cannot find an encompassing exception handler the exception is caught by a default exception handler. The latter, usually, prints information about the thrown exception, e.g. where it was thrown, and, then, the runtime system, and, consequently, the Java® program, terminate. An exception handler can catch an exception based on its group or general type by specifying any of the exception’s superclasses in the catch statement.
An exception in Java® is an object which is a subclass of the class Throwable, and, in most cases, it is derived by the class Exception, which is a subclass of Throwable. The Throwable class is the superclass of all errors and exceptions in the Java® language. Only objects that are instances of this class, or of one of its subclasses, can be thrown by the Java® VM, or by a Java® throw statement.
In Java®, exceptions are managed by first trying something in a try {...} compound block. The code that might throw an exception is placed in a try block. If an error occurs in that block, an exception is thrown. Then, a catch {...} block is used to catch any exception that is thrown. Zero or more catch blocks may follow the try block. The code that provides the action to be taken in case a certain exception occurs is placed in a catch block that follows the try block. Each catch block specifies the type of exception it can catch. Only the Throwable class or one of its subclasses can be the argument type in a catch clause. An optional finally {...} block, which follows the last catch block, or the try block when there are no catch blocks, provides code that is executed in any case, i.e. regardless of whether an exception occurs or not.
Example
class Exceptions1
{
    public static void main(String args[])
    {
        double d[] = new double[4];
        for(int j=0 ; j<d.length ; j++)
            d[j] = j*7.15 + 2.19;
        for(int j=0 ; j<=d.length ; j++)
        {
            try{
                System.out.print(" d[" + j + "] / " + j + " = ");
                if(j==0)
                    throw new DivideByZeroException();
                System.out.println(d[j]/j);
            }
            catch(ArrayIndexOutOfBoundsException e)
            {
                System.out.println("Exception: " + e.getMessage());
                System.out.print("e.printStackTrace(): ");
                e.printStackTrace();
            }
            catch(DivideByZeroException e)
            {
                System.out.println("Exception: " + e.getMessage());
            }
            finally
            {
                System.out.println("Inside finally()\n");
            }
        }
        System.out.println("\n Program exiting \n");
    }
}
class DivideByZeroException extends ArithmeticException
{
    DivideByZeroException()
    {
        super("Trying to divide by zero");
    }
}
Output:
d[0] / 0 = Exception: Trying to divide by zero
Inside finally()
d[1] / 1 = 9.34
Inside finally()
d[2] / 2 = 8.245000000000001
Inside finally()
d[3] / 3 = 7.880000000000002
Inside finally()
d[4] / 4 = Exception: null
e.printStackTrace(): java.lang.ArrayIndexOutOfBoundsException
at Exceptions1.main(Compiled Code)
Inside finally()
Program exiting
A checked exception requires the specification of the way that the exception should be handled. If a function may result in a checked exception, this must be declared using the keyword throws at the definition of the function, after its parameter list. A method can only throw checked exceptions that have been specified in its declaration. Java® requires that a method either catch, or specify, all checked exceptions that can be thrown within the scope of that method.
Unchecked exceptions are of type RuntimeException, Error, or subclasses of them, and can be thrown anywhere without the need to specify the possibility of such exceptions being thrown. Classes Exception and Error are subclasses of the class Throwable, as shown in the following figure, which was adapted from Sun’s Java® Tutorial. Class RuntimeException is a subclass of the class Exception.
[Figure: the exception class hierarchy, with Exception and Error as subclasses of Throwable, and RuntimeException as a subclass of Exception]
Any method (i.e. function) that may produce a non-RuntimeException should declare the type of exception that it can produce using the throws keyword, or that exception should be caught in a try/catch block in the method. The basic error type is class Exception, although there are more specific types of exceptions. When a checked exception occurs, the throw keyword may be used to actually create the Exception object, and either an exception handler should catch and handle the exception, or the function exits. Java® uses the “termination model of exception handling”, since control cannot return to the throw point when an exception is thrown.
Code that may produce an exception is placed within the try block of a try-catch statement. If the code within the try block fails, the code within a corresponding catch block, if one exists, is executed. When an exception is thrown, control goes out of the try block and the catch blocks are searched for an appropriate handler. If the code succeeds, then control passes to the next statement following the try-catch statement or to the finally block. In the presence of a finally block, the latter is executed irrespective of whether an exception has occurred. If no exception handler is found for an exception, the search continues in the enclosing try block, and a Java® application terminates if no exception handler is eventually found.
Class Error is also provided in Java® to be used for serious problems that should not be caught. The Error class is a subclass of the Throwable class. RuntimeExceptions should be dealt with directly rather than by throwing an exception.
The main advantages of using exception handling are that regular code is separated from error handling code making the code much more readable. Also, error types can be grouped together, and the errors can propagate up the call stack to find a proper exception handler.
However, use of exception handling for purposes other than error handling should be avoided, since it reduces clarity and results in an execution time overhead imposed by the exception handling code. Exception handling should be used only when a method is unable to detect and process an error itself and, thus, throws an exception under an exceptional situation, instead of processing it locally. When an exception does not occur, however, there is almost no execution overhead.
An exception handler can do several things, such as try to recover and resume execution, rethrow the exception, or throw a different type of exception. As soon as the exception handler begins execution the try block has expired, and control is in a different scope. If an exception occurs in a handler, it may be processed only by code outside the try block in which the original exception had been thrown. Similarly, a rethrown exception can be caught only by an exception handler after the next enclosing try block.
An overriding method in a subclass can have in its throws list only the same exceptions as, or a subset of, those in the throws list of the superclass’s method.
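A short sketch of a checked exception declared with throws (the exception class EmptyInputException and the method firstChar() are hypothetical): the method specifies the exception it may throw, and every caller must either catch it or declare it in turn.

```java
// A hypothetical checked exception: extends Exception, so it is not unchecked.
class EmptyInputException extends Exception
{
    EmptyInputException() { super("input string is empty"); }
}

class ThrowsDemo
{
    // The throws clause specifies the checked exception this method may produce.
    static int firstChar(String s) throws EmptyInputException
    {
        if (s.length() == 0)
            throw new EmptyInputException();
        return s.charAt(0);
    }

    public static void main(String args[])
    {
        try {
            System.out.println("first: " + firstChar("abc"));
            firstChar("");                        // this call throws
        } catch (EmptyInputException e) {         // the caller catches (or declares) it
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```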
2. Threads
A thread is a single sequence of steps executed one at a time, i.e. a sequential flow of control, running within a program and taking advantage of the resources allocated for that program and its environment.
Multithreading allows a program to perform several tasks, i.e. to use more than one flow of control, concurrently. All threads of a multithreaded Java® program share the same data and system resources, running at the same time and performing different tasks. Since most computers have only one CPU, threads share the CPU with other threads.
Threads can be used in one of the following ways:
By providing a subclass of the Thread class
By providing a class that implements the Runnable interface.
1. Providing a subclass of the Thread class:
We need to provide a subclass of the Thread class. This subclass should override the run() method of class Thread. The latter must contain all code that is to be executed within the thread. An instance of the subclass can then be allocated and started.
class MyThreadClass extends Thread
{
public void run()
{
// Code to be executed within the thread
}
}
We can then create a new thread by instantiating the subclass of the Thread class, and configure it, e.g. by setting its initial priority, name, etc. Finally, we can run it by calling the start() method, which is inherited from the class Thread and invokes the new thread’s run() method.
MyThreadClass x = new MyThreadClass();
x.start();
However, using this approach is not possible when we need to extend some other class, since Java® does not allow multiple inheritance.
Example
class MyThreadClass extends Thread
{
    String name;
    MyThreadClass(String str)
    {
        name = str;
    }
    public void run()
    {
        for(int i=0; i<5; i++)
        {
            System.out.println("Thread: " + name);
            try
            {
                Thread.sleep((int) (Math.random()*1000));
            }
            catch(InterruptedException e)
            {
                System.out.println("Catch: " + getName());
            }
        }
    }
}
class ThreadExample1
{
public static void main(String args[])
{
MyThreadClass th1 = new MyThreadClass("Thread-1");
MyThreadClass th2 = new MyThreadClass("Thread-2");
MyThreadClass th3 = new MyThreadClass("Thread-3");
th3.start();
th1.start();
th2.start();
}
}
Output: > java ThreadExample1
Thread: Thread-3
Thread: Thread-1
Thread: Thread-2
Thread: Thread-1
Thread: Thread-1
Thread: Thread-3
Thread: Thread-1
Thread: Thread-2
Thread: Thread-3
Thread: Thread-1
Thread: Thread-3
Thread: Thread-2
Thread: Thread-3
Thread: Thread-2
Thread: Thread-2
2. Implementing the Runnable Interface
The other way to create a thread is to declare a class that implements the Runnable interface. The Runnable interface requires the implementation of the run() method, in which all code that is to be executed within the thread should be placed. An instance of the class can then be allocated. Finally, a Thread object can be created passing as an argument the object of the class that implements the Runnable interface, and, then start the thread.
class MyRunnableClass implements Runnable
{
public void run()
{
// Code to be executed within the thread
}
}
A new thread can be created by constructing a Thread object using an object of type MyRunnableClass, and, then, by calling the start() method of the Thread class, which runs the thread.
MyRunnableClass x = new MyRunnableClass();
Thread t = new Thread(x);
t.start();
Example
class MyRunnableClass implements Runnable
{
String name;
MyRunnableClass(String str)
{
name = str;
}
public void run()
{
for(int i=0; i<5;i++)
{
System.out.println("Thread: " + name);
try
{
Thread.sleep((int) (Math.random()*1000));
}
catch(InterruptedException e)
{
System.out.println("InterruptedException in " + name + " thread.");
}
}
}
}
class ThreadExample2
{
public static void main(String args[])
{
MyRunnableClass r1 = new MyRunnableClass("Thread-1");
MyRunnableClass r2 = new MyRunnableClass("Thread-2");
MyRunnableClass r3 = new MyRunnableClass("Thread-3");
Thread th1 = new Thread(r1);
Thread th2 = new Thread(r2);
Thread th3 = new Thread(r3);
th3.start();
th1.start();
th2.start();
}
}
Output: > java ThreadExample2
Thread: Thread-3
Thread: Thread-1
Thread: Thread-2
Thread: Thread-2
Thread: Thread-2
Thread: Thread-1
Thread: Thread-3
Thread: Thread-2
Thread: Thread-3
Thread: Thread-2
Thread: Thread-1
Thread: Thread-1
Thread: Thread-3
Thread: Thread-3
Thread: Thread-1
States of a thread during its lifetime:
New: A new thread is one that has been created, i.e. using the new operator, but has not yet been started.
Runnable (ready state): A thread becomes runnable once its start() method has been invoked, which means that the code in the run() method can execute whenever the thread receives CPU time from the operating system. A thread whose sleep interval has expired, a thread which was waiting and has been notified, a thread that had been suspended and has been resumed, and a thread that had been blocked by an I/O request whose I/O has completed are also in the runnable state.
Not Runnable (blocked state): A thread can become not runnable if: the thread’s sleep() method is invoked (in which case the thread remains blocked for a specified number of milliseconds, giving a chance to lower-priority threads to run); the thread’s suspend() method is invoked (in which case the thread remains blocked until its resume() method is invoked); the thread calls the wait() method of an object (in which case the thread remains blocked until either the object’s notify() method or its notifyAll() method is called from another thread); or the thread has blocked on an I/O operation (in which case the thread remains blocked until the I/O operation has been completed).
Dead: A thread can die either because the run() method has finished executing, i.e. has terminated, or because the thread’s stop() method has been called. The latter should be avoided because it throws a ThreadDeath object; ThreadDeath is a subclass of Error.
The following diagram shows the possible life cycle of a thread:
Note that the methods stop(), suspend(), and resume() have been deprecated and, therefore, should be avoided when managing threads. Instead of using the stop() method, one can modify some variable to indicate that the target thread should stop running. The target thread should check this variable regularly, and return from its run() method as soon as the variable indicates that it is to stop running. To avoid the suspend() and resume() methods, a variable indicating the desired state of the thread (active or suspended) can be used. When the desired state is suspended, the thread can wait using the wait() method. When the thread is to be resumed, the target thread is notified using the notify() method.
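The stop-flag idiom described above can be sketched as follows (class names are illustrative); the flag is declared volatile, a detail not mentioned above, so that the change made by the requesting thread is guaranteed to be visible to the target thread.

```java
// A sketch of the stop-flag idiom that replaces the deprecated stop() method.
class StoppableWorker implements Runnable
{
    private volatile boolean running = true;   // checked regularly by the thread
    int iterations = 0;

    public void requestStop() { running = false; }

    public void run()
    {
        while (running)                        // returning from run() ends the thread
        {
            iterations++;
            try { Thread.sleep(10); }
            catch (InterruptedException e) { running = false; }
        }
    }
}

class StopDemo
{
    public static void main(String args[]) throws InterruptedException
    {
        StoppableWorker w = new StoppableWorker();
        Thread t = new Thread(w);
        t.start();
        Thread.sleep(50);
        w.requestStop();                       // ask the thread to stop itself
        t.join();                              // wait for it to die cleanly
        System.out.println("worker stopped after " + w.iterations + " iterations");
    }
}
```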
The wait() method makes a thread wait until some condition is satisfied. Actually, wait() not only pauses the corresponding thread but also releases the lock on the object, which allows other threads to invoke synchronized code on that object. The notification methods notify() and notifyAll() can be used to inform waiting threads that there was some change that might have caused the satisfaction of that condition. Almost always the notifyAll() method is used to wake up all waiting threads, while notify() picks one of the waiting threads and wakes it up. Notifications affect only threads that have been waiting ahead of the notification, i.e. for which the wait() has been executed before the occurrence of the notification. The wait and notify methods can be invoked only from synchronized code, either directly or indirectly (through another method called from the synchronized code).
Typically wait() is used as follows. Note that the condition test should always be in a loop.
synchronized returnType functionName()
{
while(!condition)
wait();
statements to be executed when the condition is true
notifyAll();
}
The normal and cleanest way to end a thread’s life is having the run() method return.
Method interrupt() can be used to interrupt a thread execution.
Every thread has a priority, which is a number between Thread.MIN_PRIORITY and Thread.MAX_PRIORITY. Threads with higher priority are executed in preference to threads with lower priority. When code running in some thread creates a new Thread object, the new thread has its priority initially set equal to the priority of the creating thread.
When multiple threads are runnable (i.e. ready to be executed), the runtime system chooses the runnable thread with the highest priority for execution. Only when that thread stops, yields, or becomes not runnable for some reason does a lower priority thread start executing.
Java® platforms that support time-slicing allow each thread of equal priority to run by providing to all threads a limited amount of the processor’s time. On Java® platforms that do not support time-slicing a thread of a given priority runs to completion or until a higher (but not equal) priority thread becomes runnable. Method yield(), which is useful only on non-time-slicing systems, allows threads of the same priority to run.
Execution of multiple threads in some order is called scheduling. This is supported by a very simple, deterministic scheduling algorithm (fixed priority scheduling) of the Java® runtime. This algorithm schedules threads based on their priority relative to other runnable threads. The Java® scheduler keeps the highest priority thread running at all times, and when time-slicing is supported allows equal high-priority threads to execute by giving slices of the processor time to them in sequence. In general, it is good practice to have the continuously running part of a program, or threads that do frequent and continual updates set to MIN_PRIORITY to avoid blocking other threads.
Threads on multiprocessor machines utilize the multiple processors. A number of the highest-priority threads equal to the number of the available processors execute.
One can set and get the name of a thread using the methods setName() and getName(), respectively, of the Thread class. Similarly, the priority can be set and retrieved using the setPriority() and getPriority() methods of the Thread class.
The thread object of the currently running thread can be obtained using the currentThread() method of the Thread class.
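The following short sketch (names are illustrative) exercises these methods:

```java
// Setting and getting a thread's name and priority, and obtaining a
// reference to the currently running thread.
class ThreadInfoExample {
    public static void main(String[] args) {
        Thread t = new Thread();
        t.setName("worker-1");              // set the thread's name
        t.setPriority(Thread.MIN_PRIORITY); // set its priority (1..10)

        // Reference to the thread currently executing this code:
        Thread me = Thread.currentThread();

        System.out.println(t.getName() + " has priority " + t.getPriority());
        System.out.println("running in: " + me.getName()); // typically "main"
    }
}
```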
Example
class MyRunnableClass implements Runnable
{
    String name;

    MyRunnableClass(String str)
    {
        name = str;
    }

    public void run()
    {
        for (int i = 0; i < 5; i++)
        {
            System.out.println("Thread: " + name);
            try
            {
                Thread.sleep((int) (Math.random() * 3));
            }
            catch (InterruptedException e)
            {
                System.out.println("InterruptedException in " + name + " thread.");
            }
        }
    }
}

class ThreadExample3
{
    public static void main(String args[])
    {
        MyRunnableClass r1 = new MyRunnableClass("Thread-1");
        MyRunnableClass r2 = new MyRunnableClass("Thread-2");
        MyRunnableClass r3 = new MyRunnableClass("Thread-3");

        Thread th1 = new Thread(r1);
        Thread th2 = new Thread(r2);
        Thread th3 = new Thread(r3);

        th2.setPriority(10);
        th3.setPriority(1);

        System.out.println("th1 priority: " + th1.getPriority());
        System.out.println("th2 priority: " + th2.getPriority());
        System.out.println("th3 priority: " + th3.getPriority() + "\n");

        th3.start();
        th1.start();
        th2.start();
    }
}
Output: > java ThreadExample3
th1 priority: 5
th2 priority: 10
th3 priority: 1

Thread: Thread-2
Thread: Thread-2
Thread: Thread-2
Thread: Thread-2
Thread: Thread-2
Thread: Thread-1
Thread: Thread-1
Thread: Thread-1
Thread: Thread-3
Thread: Thread-1
Thread: Thread-1
Thread: Thread-3
Thread: Thread-3
Thread: Thread-3
Thread: Thread-3
Java®’s garbage collector runs as a low-priority thread when processor time is available and when there are no higher-priority runnable threads.
Each thread may or may not be marked as a daemon. A daemon thread is a thread that runs in the background (when the processor is available) for the benefit of other threads. An example of a daemon thread is the garbage collector. When code running in some thread creates a new Thread object, the new thread is a daemon thread if and only if the creating thread is a daemon. Daemon threads do not prevent a program from terminating, since the program exits when only daemon threads remain in it. The Java® VM exits when the only threads running are all daemon threads. The function setDaemon(boolean on) can be used, before the thread is started, to mark a thread as a daemon thread or a user thread. If the argument to the setDaemon(boolean on) is true, the thread is marked as a daemon thread.
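A short sketch of marking a thread as a daemon (the endless loop is deliberately harmless here, since a daemon does not keep the JVM alive):

```java
// A daemon thread does not prevent the JVM from exiting; this one loops
// forever, yet the program still terminates when main() returns.
class DaemonExample {
    public static void main(String[] args) {
        Thread background = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try { Thread.sleep(10); }           // "background work"
                    catch (InterruptedException e) { return; }
                }
            }
        });
        background.setDaemon(true);   // must be called before start()
        background.start();
        System.out.println("daemon? " + background.isDaemon());
        // main() returns here; the JVM exits despite the running daemon
    }
}
```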
A selfish thread is a thread that never voluntarily gives up control of the CPU, continuing to run until either the thread's run() method terminates, or the thread is preempted by a higher-priority thread. Time-slicing systems do not allow a selfish thread to keep control when there are other runnable threads of equal priority, allowing the other threads to run.
In particular, one should never rely on time-slicing and should write well-behaved threads that periodically and voluntarily give up control of the CPU, giving other threads an opportunity to run.
Race hazard, or race condition, is the situation in which two or more threads can modify the same data in a way that the data can be corrupted by inconsistent changes.
Starvation is an indefinite postponement of the execution of lower-priority threads due to higher-priority threads which do not give them the chance to run. Starvation occurs when one (or more threads) in a program cannot progress because it is blocked from gaining access to a certain resource.
Deadlock is the worst case of starvation; it occurs when two or more threads are waiting on a condition that cannot be satisfied, e.g. when two (or more) threads are each waiting for an action of the other(s). Race conditions, by contrast, arise when multiple, asynchronously executing threads access a single object (its data) at the same time, which may produce wrong results and inconsistencies. Synchronization can be used to avoid such race problems.
When several concurrent threads are competing for resources, starvation and deadlock should be prevented. Each thread should get enough access to limited resource in order to reasonably progress without causing problems to the other threads or corrupting common data.
Synchronization
In some cases separate, concurrently running threads share data in a way that they must take into account the state of other threads. An example of such a case is the so-called producer/consumer scenario in which the “producer” thread generates data that are consumed by a “consumer” thread.
In such cases, problems can be avoided using synchronization, which allows a thread to lock an object; if another thread tries to invoke a synchronized method on the same object, the second thread is blocked until the object is unlocked by the first thread. A lock is associated with every object that has synchronized code.
Critical code sections are code segments (e.g. a method) within a program that access the same data (e.g. object) from separate but concurrent threads. In the Java® language, the synchronized keyword is used to identify a critical section. Whenever a synchronized method of a class is invoked, the associated object is locked by that method. A synchronized method cannot be called on the same object until the object is unlocked, i.e. another thread cannot invoke a synchronized method on that object until the lock is released. When a synchronized method is invoked on an object it obtains its lock, not allowing any other thread to invoke any synchronized method until it releases the lock. A thread can voluntarily call wait(), which releases the lock and the processor, and wait in a queue while other threads are able to obtain the lock and invoke synchronized code. The methods notify() and notifyAll() can be used to signal to waiting threads to become ready again and attempt to obtain the lock. If waiting threads are not notified the threads will wait forever causing a deadlock.
Java® locks are reentrant, i.e. a thread is allowed to re-acquire a lock that it already holds. This avoids the problems which arise when a thread calls, from a synchronized method of an object, another synchronized method of the same object, which could otherwise lead to deadlock.
Static methods can also be synchronized. In that case a “class lock” for the corresponding class is used, which does not allow two threads to execute synchronized static methods on the same class at the same time.
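A minimal sketch of a class-lock-protected counter (the class name is illustrative); because increment() and get() are synchronized static methods, concurrent increments from many threads cannot interleave and lose updates:

```java
// A synchronized static method locks the Class object itself, so no two
// threads can execute synchronized static methods of this class at once.
class SharedCounter {
    private static int count = 0;

    public static synchronized void increment() { count++; }
    public static synchronized int get() { return count; }
}
```

Without the synchronized keyword, the read-modify-write in count++ could interleave between threads, and some increments would be lost.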
A synchronized statement is an alternative way, in some cases, to execute synchronized code: it locks the given object so that no synchronized method can be invoked on that object while it is locked.
synchronized (objectToBeLocked)
{
    statements
}
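For instance, a synchronized statement can guard just the critical section of a method rather than the whole method (the Account class is an illustrative sketch, not from the notes):

```java
// Protection with a synchronized statement: only the critical section
// is guarded, and the lock used is the Account object itself.
class Account {
    private double balance = 100.0;

    public void deposit(double amount) {
        synchronized (this) {        // lock this Account object
            balance = balance + amount;
        }
    }

    public double getBalance() {
        synchronized (this) { return balance; }
    }
}
```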
Thread Group
In Java® every Thread object is a member of a thread group, which makes it possible to handle multiple threads that belong to the same thread group as a group. The ThreadGroup class is used to implement thread groups and to operate on the threads in a thread group collectively.
The Java® runtime system creates a ThreadGroup by default, named main, as soon as a Java® application starts. A newly created thread is placed in the same thread group as the thread that created it. Therefore, unless specified otherwise, all newly created threads become members of the main thread group by default. In an applet a newly created thread may belong to a thread group other than main, depending on the browser. A thread cannot change thread group after its creation.
A ThreadGroup can contain any number of threads, which usually, however, are related in some way, and even other ThreadGroup objects. The top-most ThreadGroup in a Java® application is the ThreadGroup named main, which is the default thread group.
Method activeCount() gives the number of active threads in a thread group plus those in all its child thread groups. The ThreadGroup class provides a variety of methods to manage and operate on ThreadGroup objects and the Thread objects that are members of those thread groups.
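A small sketch of placing threads in an explicit thread group and querying it (the group name "workers" is illustrative):

```java
// Placing threads in an explicit ThreadGroup and querying it.
class GroupExample {
    public static void main(String[] args) throws InterruptedException {
        ThreadGroup workers = new ThreadGroup("workers");
        Runnable task = new Runnable() {
            public void run() {
                try { Thread.sleep(200); } catch (InterruptedException e) { }
            }
        };
        // The Thread(ThreadGroup, Runnable, String) constructor places
        // each new thread in the given group.
        Thread t1 = new Thread(workers, task, "w1");
        Thread t2 = new Thread(workers, task, "w2");
        t1.start();
        t2.start();
        System.out.println("group: " + workers.getName()
                + ", active: " + workers.activeCount());
        t1.join();
        t2.join();
    }
}
```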
3. I/O
The System class, which is a final class and has private constructors (i.e. not allowing its extension), provides several useful class variables and methods. For example System.out is a PrintStream object that implements the standard output stream, and System.getProperty(String key) is a static method that returns the system property indicated by the specified key.
The System class provides:
- in: the standard input stream to read input data (static InputStream object)
- out: the standard output stream to output results (static PrintStream object)
- err: the standard error stream to display error messages (static PrintStream object)
Both standard output and error, which are PrintStream objects, can invoke one of the PrintStream methods print(), println(), and write() to print text to the stream. The latter method, which is less common, is used to write non-ASCII data, i.e. to write bytes to the stream.
The java.io package provides a collection of stream classes that support reading from and writing to an external destination. The provided classes operate on either characters or byte data types.
The abstract superclasses for character streams in java.io are the Reader (abstract class for reading character streams) and the Writer (abstract class for writing to character streams) classes, which partially provide the functionality for the characters reader and writer stream classes, respectively. [The following inserted figures have been adapted from the on-line Java® Tutorial of SUN]
In general, readers and writers should be preferred to read and write information since they can handle any character in the Unicode character set because they use 16-bits per character.
Similarly, the abstract superclasses for byte streams in java.io are the InputStream (abstract class for reading byte streams) and the OutputStream (abstract class for writing to byte streams) classes.
Reader and InputStream provide methods for reading characters and bytes, respectively. They also provide methods for marking a location in the stream, skipping input, and resetting the current position. Similarly, Writer and OutputStream provide methods for writing characters and bytes, respectively.
All readers, input streams, writers, and output streams are automatically opened upon the creation of the corresponding objects. Although they are eligible to be closed by the garbage collector, as soon as they are not referenced in any way, they can be explicitly closed by calling their close() method.
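As a brief sketch of the character-stream classes in use, the following writes a small text file and reads it back, closing each stream explicitly when done (the file name is illustrative):

```java
import java.io.*;

// Writing and then reading back a small text file with character
// streams; each stream is closed explicitly when no longer needed.
class ReaderWriterExample {
    public static void main(String[] args) throws IOException {
        FileWriter out = new FileWriter("notes.txt");
        out.write("Hello, streams");
        out.close();                      // flush and release the file

        BufferedReader in = new BufferedReader(new FileReader("notes.txt"));
        String line = in.readLine();      // read one line of characters
        in.close();
        System.out.println(line);
    }
}
```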
The above stream classes can be categorized into two groups: one that deals with reading and writing data, and another that performs some kind of operation on the data while reading and writing. The former are the ones shown with gray color in the above figures, while the latter are the ones shown in white.
A useful class provided by the java.io package is the StreamTokenizer class. A StreamTokenizer object can be created by passing an InputStreamReader object as the argument to its constructor. The StreamTokenizer object may then be used to parse an input stream into tokens, which it reads one at a time.
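A short sketch of tokenizing input (a StringReader stands in here for the InputStreamReader mentioned above, and the input text is illustrative):

```java
import java.io.*;

// Parsing an input stream into word and number tokens, one at a time.
class TokenizerExample {
    public static void main(String[] args) throws IOException {
        Reader r = new StringReader("width 640 height 480");
        StreamTokenizer st = new StreamTokenizer(r);
        while (st.nextToken() != StreamTokenizer.TT_EOF) {
            if (st.ttype == StreamTokenizer.TT_WORD)
                System.out.println("word:   " + st.sval);   // word tokens
            else if (st.ttype == StreamTokenizer.TT_NUMBER)
                System.out.println("number: " + st.nval);   // numeric tokens
        }
    }
}
```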
4. Introduction to Java® GUI and Swing
Java® Foundation Classes (JFC) provide features to facilitate the development of Graphical User Interfaces (GUIs). Among other things, JFC provides Swing, which includes several components that can be used for the development of GUIs, support for choosing the look and feel that a program uses (via the setLookAndFeel() method of the UIManager class), and Java®2D for high-quality 2D graphics. Swing is built on top of the AWT (Abstract Window Toolkit), using the AWT infrastructure. However, Swing provides its own graphical user interface (GUI) components, many of which closely correspond to the AWT components. Essentially, Swing is an extension and improvement of the AWT.
JFC 1.1 (with Swing API 1.1) is built into JDK 1.2. The latest JFC development is the JFC/Swing portion of JDK 1.3, which is code-named Kestrel and was released this year.
The Swing API is provided in the following Swing packages:
- javax.swing (the main swing package)
- javax.swing.border
- javax.swing.colorchooser
- javax.swing.event
- javax.swing.filechooser
- javax.swing.plaf
- javax.swing.plaf.basic
- javax.swing.plaf.metal
- javax.swing.plaf.multi
- javax.swing.table
- javax.swing.text
- javax.swing.text.html
- javax.swing.text.html.parser
- javax.swing.text.rtf
- javax.swing.tree
- javax.swing.undo
The AWT (Abstract Window Toolkit) was first used to develop GUIs in Java®, and provided the foundation on which the JFC and Swing were built. The main difference between AWT and Swing is that the components of the latter are implemented without any native code. Because of this, Swing components are called lightweight, while AWT components, which use native code, are called heavyweight components. Lightweight components are drawn entirely using Java®, while heavyweight components use native peers. Actually, AWT 1.1 also introduced some lightweight components, while earlier versions were based solely on heavyweight components. Swing components, in general, have many more capabilities than the corresponding AWT ones. For instance, some Swing components (such as labels and buttons) can display icons, while the corresponding AWT components cannot. Swing also provides a number of additional components that were not provided by AWT.
Swing provides several standard components (e.g. buttons, checkbuttons, radiobuttons, menus, lists, labels, and text areas) to create a program’s GUI using one or more containers (e.g. frames, dialogs, windows and tool bars). Most Swing components that begin with J, except the top-level containers, are subclasses of the JComponent class. The letter J is used to differentiate the actual extra user interface classes provided by Swing from the support classes that it provides. Swing components inherit many features from the JComponent class, such as a configurable look and feel, borders, and tool tips, as well as many methods. In addition, some Swing components can display images on them. The JComponent class extends the Container class (provides support for adding and laying out components), which itself extends the Component class (provides support for painting, events, layout etc.). JComponent is the base class for almost all lightweight J components. Therefore, all swing lightweight components (derived from the JComponent class) are subclasses of the Container class of AWT.
The following article, by Bill Harlan, provides some useful information about Improving Swing Performance.
Containers
Every program with a Swing GUI contains at least one top-level Swing container, i.e., in general, an instance of JApplet, JFrame, or JDialog, which enables the painting and event handling of the Swing components. JApplet, JFrame, and JDialog are considered top-level containers because they provide an area in which the other containers and components can appear. Other containers, such as JPanel, are used to facilitate the positioning and sizing of other containers and components. Between a top-level container and an intermediate container, a root pane (JRootPane), which is also an intermediate container, is indirectly provided; its content pane contains all of the visible components of the GUI, except a menu bar. One of the add() methods of a container can be used to add a component to it, with the component to be added passed as an argument. In some cases another argument, which has to do with the layout, is used with the add() method.
Besides the top-level containers (JApplet, JFrame, JDialog, and JWindow) there are special-purpose containers (such as JInternalFrame, JLayeredPane, and JRootPane) that are used for special purposes, and general-purpose containers (such as JPanel, JScrollPane, JSplitPane, JTabbedPane, and JToolBar) that are used for any other general purpose.
The JApplet and JFrame classes should be used to implement Swing applets or applications, respectively. Both JApplet and JFrame classes are containers that contain an instance of the JRootPane class. The latter contains the content pane, which is a container, that contains all the components contained in the applet or application. Therefore, components should be added and layout managers should be set to the content pane.
Atomic Components
There are many atomic components (i.e. components that exist as self-sufficient entities and not to hold other Swing components). Atomic components can be grouped into 3 categories:
- Basic Controls, which are components primarily used to get input from the user (such as JSlider, JTextField, JButton, JComboBox, JList, and JMenu)
- Uneditable Information Displays, which are used to display information to the user (such as JProgressBar, JLabel, and JToolTip)
- Editable Displays of Formatted Information, which are components used to display formatted information that can be edited by the user (such as JColorChooser, JTextComponent, JFileChooser, JTable, and JTree)
Applets and Applications
Swing applets use the JApplet class, which is a subclass of the AWT Applet class and implements the Accessible and RootPaneContainer interfaces. The following simple example shows an applet with a single component, a JLabel. The BorderLayout manager is used by the JApplet for its content pane (which is used in both Swing applets and applications), in contrast to the FlowLayout manager used by the Applet class.
SwingApplet1.java
import java.awt.*;
import javax.swing.*;

public class SwingApplet1 extends JApplet
{
    public void init()
    {
        Icon icon = new ImageIcon("1.124.gif", "1.124 GIF");
        JLabel label = new JLabel(" Test", icon, SwingConstants.CENTER);
        getContentPane().add(label);
    }
}
The above applet can be executed using an html file like the following:
SwingApplet1.html
<html>
<head>
<title>JApplet Example # 1</title>
</head>
<body><center>
<h2><u>JApplet Example # 1</u></h2>
<hr>
<applet code="SwingApplet1.class" WIDTH="486" HEIGHT="94"></applet>
<hr></center><p><br>
</body>
</html>
Then, appletviewer (using the command: appletviewer SwingApplet1.html) displays the following window:
Similarly, the JFrame class can be used to create a Swing application similar to the above applet. The JFrame class is a subclass of the AWT Frame class and implements the Accessible, WindowConstants, and RootPaneContainer interfaces. It uses the BorderLayout manager for its content pane. The source code is presented below:
SwingApplication1.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class SwingApplication1 extends JFrame
{
    public SwingApplication1()
    {
        super("Swing Application");
        Icon icon = new ImageIcon("1.124.gif", "1.124 GIF");
        JLabel label = new JLabel(" Test", icon, SwingConstants.CENTER);
        getContentPane().add(label);
    }

    public static void main(String args[])
    {
        final JFrame jfr = new SwingApplication1();
        jfr.setBounds(100, 50, 500, 150);
        jfr.setVisible(true);
        jfr.setDefaultCloseOperation(DISPOSE_ON_CLOSE);
        jfr.addWindowListener(new WindowAdapter()
        {
            public void windowClosed(WindowEvent e)
            {
                System.exit(0);
            }
        });
    }
}
As any Java® application, a main() method must be provided. In main() a JFrame is instantiated, sized, and made visible. The default close operation is set to DISPOSE_ON_CLOSE (to dispose any native resources when the window is closed), and a window listener is added in order to exit the application as soon as the frame is closed.
Running the Java® interpreter (using java SwingApplication1) will execute the program and give the following window:
Often, the source code file can be written in a way that it can be used both as an applet and an application. The above example is rewritten in way to facilitate such a use. The source code is provided below:
AppletApplication1.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class AppletApplication1 extends JApplet
{
    public void init()
    {
        Icon icon = new ImageIcon("1.124.gif", "1.124 GIF");
        JLabel label = new JLabel(" Test", icon, SwingConstants.CENTER);
        getContentPane().add(label);
    }

    public static void main(String args[])
    {
        final JFrame myFrame = new JFrame("Applet as Application");
        JApplet myApplet = new AppletApplication1();
        myApplet.init();

        myFrame.setContentPane(myApplet.getContentPane());
        myFrame.setBounds(100, 50, 500, 150);
        myFrame.setVisible(true);
        myFrame.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
        myFrame.addWindowListener(new WindowAdapter()
        {
            public void windowClosed(WindowEvent e)
            {
                System.exit(0);
            }
        });
    }
}
Mixing AWT and Swing
Attention should be given when mixing AWT and Swing components, since when lightweight components overlap with heavyweight components, the heavyweight components are always painted on top. In general, mixing AWT and Swing components should be avoided whenever possible. The "depth" at which components are displayed in a container is represented by the Z-order. The latter is determined by the order in which each component is added to the container, i.e. the first component to be added to a container has the highest Z-order, which means that it is displayed in front of all other components added to that container. When lightweight and heavyweight components are mixed, the lightweight components, which need to reside in a heavyweight container, have the same Z-order as their container, and within it the order of each of the lightweight components is determined by the order in which they are added to the container. Note that whenever a container is extended and the paint() method is overridden, the superclass paint() method should be explicitly invoked using super.paint() to force drawing of the lightweight components.
When Swing popup menus, which are lightweight, are mixed with a heavyweight component, the latter overlaps the former. This can be avoided by forcing the popup menus to be heavyweight using the method setLightWeightPopupEnabled() of the JPopupMenu class. A similar problem occurs with a JScrollPane instance when any heavyweight components are added to it. Because there is no way to make a JScrollPane heavyweight, the AWT ScrollPane can be used instead, which works fine with both lightweight and heavyweight components. Finally, heavyweight components should not be added to Swing internal frames, i.e. to JInternalFrame instances.
Layout Managers
Layout managers (such as BorderLayout, BoxLayout, GridLayout, FlowLayout, CardLayout, and GridBagLayout) are used to determine the size and position of the components that are contained in a container. Each container is provided a default layout manager, which, however, can be changed using the setLayout() method, passing as an argument a newly created instance of the preferred layout manager. The minimum, preferred, and maximum sizes of a component can also be specified to the layout manager, as well as the preferred alignment.
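A minimal sketch of replacing a container's default layout manager (the panel contents and button labels are illustrative); JPanel defaults to FlowLayout, but here a 2x2 GridLayout is installed instead:

```java
import java.awt.*;
import javax.swing.*;

// Replacing a container's default layout manager with setLayout().
class LayoutExample {
    static JPanel buildPanel() {
        JPanel panel = new JPanel();
        panel.setLayout(new GridLayout(2, 2));  // 2 rows, 2 columns
        panel.add(new JButton("NW"));
        panel.add(new JButton("NE"));
        panel.add(new JButton("SW"));
        panel.add(new JButton("SE"));
        return panel;
    }
}
```

The panel returned by buildPanel() can then be added to a frame's content pane; the GridLayout arranges the four buttons in equal-sized cells.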
Painting System
The AWT painting system controls the painting of a Swing GUI using the event-dispatching thread, starting with the highest component that needs to be repainted and moving down the containment hierarchy, i.e. each component paints itself before any of the components it contains. The painting of Swing components occurs whenever necessary, and Swing uses double buffering to improve the efficiency and quality of the provided GUI. A Swing top-level container is painted on-screen using one of the methods setVisible(true), show(), or pack().
Swing performs double buffering by default, to ensure smoothness and avoid flickering. Double buffering is essentially implemented by an offscreen buffer in which all painting takes place; the buffer is then copied to the screen.
Event Handling
The event-handling features of the Swing (and AWT) components allow a Java® program to respond to external events, such as the interaction of the user with the GUI. Events are handled by event handlers that are registered with event sources. Any Swing component can be notified of the occurrence of an event simply by implementing the corresponding listener interface and registering an instance as an event listener on a relevant event source. The latter is usually a component, e.g. a button. For each event there is a corresponding event object which provides all the relevant information. An event handler is implemented by:
- either implementing the corresponding listener interface, or extending a class that implements that interface
- implementing the method(s) of the listener (interface or subclass)
- adding that kind of listener to a component
Sometimes, anonymous inner classes are used to keep the code clearer and the event handler closer to the point where it is registered. The event-handling code executes in the event-dispatching thread (single thread) ensuring that each event handler will execute in sequence allowing a previous event handler finish executing before the next one starts execution. Therefore, the code in the event handlers should be either very short and quick to execute, or another thread should be initiated to execute the code, in order to not harm the performance of the program. Otherwise, painting, which also executes on the event-dispatching thread, will not occur while an event is being handled.
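The steps above can be sketched as a typical handler: an ActionListener registered on a JButton, implemented as an anonymous inner class (the class name and counter are illustrative):

```java
import java.awt.event.*;
import javax.swing.*;

// Registering an ActionListener (anonymous inner class) on a JButton.
class ButtonHandlerExample {
    static int clicks = 0;

    static JButton buildButton() {
        JButton button = new JButton("Press me");
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                clicks++;   // runs on the event-dispatching thread
            }
        });
        return button;
    }
}
```

Because the handler body executes on the event-dispatching thread, it is kept deliberately short, as recommended above.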
Threads
Swing, in general, is not thread safe: a Swing component should be accessed by only one thread at a time, which, in general, is the event-dispatching thread. Because Swing is not thread safe, Swing components should, in general, be accessed only from the event-dispatching thread. The event-dispatching thread is the thread that invokes callback methods (e.g. the paint() and update() methods) and event handler methods. Swing was designed not to be thread safe in order to avoid the overhead of multithreading (e.g. obtaining and releasing locks) and to simplify the subclassing of its components.
However, some Swing components do support multithreaded access. In fact, after a Swing component has been realized, code that might affect or depend on the state of that component should be executed only in the event-dispatching thread. A component is realized when it becomes available for painting on screen, i.e. after it has been painted or made ready to be painted using one of the methods setVisible(true), show(), or pack().
The access only through the event-dispatching thread is not required in the following cases:
- when dealing with thread safe methods (as specified in the Swing API documentation) to construct and show a GUI in the main thread of an application, as long as no components have been realized in the current runtime environment
- constructing and manipulating the GUI in an applet’s init() method, as long as the components have not been made visible, i.e. the method show() or setVisible(true) has never been called on the actual applet object
- methods repaint() and revalidate() are safe to call from any thread
In general, it is not safe to access Swing components from any thread other than the event-dispatching thread. However, there are times when it is preferable to update Swing components from another thread, or to perform time-consuming operations on a separate thread rather than on the event-dispatching thread. For those cases, Swing provides the methods invokeLater() and invokeAndWait() in the SwingUtilities class, which can be used to queue a runnable object on the event-dispatching thread. They essentially allow a block of code from another thread to be executed by the event-dispatching thread. Method invokeLater() queues the runnable object and returns immediately, while invokeAndWait() blocks until the runnable object's run() method has completed before returning. Only invokeLater() (and not invokeAndWait()) may be called from the event-dispatching thread.
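A minimal sketch of invokeAndWait() (the class and field names are illustrative); invokeLater() is used the same way but returns without waiting:

```java
import javax.swing.SwingUtilities;

// Running a block of code on the event-dispatching thread from another
// thread, and blocking until it completes.
class EdtExample {
    static volatile String updatedBy = null;

    public static void main(String[] args) throws Exception {
        SwingUtilities.invokeAndWait(new Runnable() {
            public void run() {
                // This code runs on the event-dispatching thread
                updatedBy = Thread.currentThread().getName();
            }
        });
        System.out.println("updated by: " + updatedBy);
    }
}
```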
These notes were prepared by Petros Komodromos.
Topics
In this last recitation more information is provided for Swing components.
1. The JComponent Class
Most Swing components that begin with J, except the top-level containers, are subclasses of the JComponent class. The letter J is used to differentiate the actual extra user interface classes provided by Swing from the support classes that it provides. Swing components inherit many features from the JComponent class, such as a configurable look and feel, borders, and tool tips, as well as many methods. In addition, some Swing components can display images on them. The JComponent class is the base class for almost all lightweight J components. The JComponent class extends the Container class (provides support for adding and laying out components), which in turn extends the Component class (provides support for painting, events, layout, etc.). Therefore, all Swing J components are AWT containers and inherit all methods from the Container and Component classes. Any instance of a JComponent subclass can contain both AWT and Swing components, since JComponent extends the java.awt.Container class.
The on-line JFC/Swing tutorial provides a summary of the following methods of the JComponent class:
- Customizing Component Appearance
- Setting Component State
- Handling Events
- Painting Components
- Dealing with the Containment Hierarchy
- Laying Out Components
- Getting Size and Position Information
- Specifying Absolute Size and Position
The JComponent class provides to its subclasses the following functionalities:
- Borders: Any instance of a JComponent subclass can be fitted in one of the several different border styles, such as custom, compound, etched, etc.
- Double buffering: Flickering is avoided by using an offscreen buffer (which is maintained by the RepaintManager) to update components before displaying them on screen.
- Graphics debugging support: Method setDebugGraphicsOptions() of the JComponent class facilitates debugging of graphics by slow motion painting with flushing before each graphics call and with optional diagnostic information.
- Autoscrolling: All JComponent subclasses can autoscroll, i.e. scroll their contents when the mouse is dragged outside their bounds. Autoscrolling is activated, or deactivated, using the method setAutoscrolls() of the JComponent class.
- Tooltips: Strings describing the functionality of a JComponent subclass object can be displayed above it, providing help to the user of the component, whenever the cursor rests over it based on timing characteristics specified by the ToolTipManager class. The method setToolTipText(String text) of the JComponent class is used to associate a tooltip with a Swing component.
- Keystroke handling support: Keystrokes can be specified for JComponent subclass objects using the method registerKeyboardAction(ActionListener anAction, String aCommand, KeyStroke aKeyStroke, int aCondition). Then, when the specified keystroke is typed, the specified ActionListener’s actionPerformed() method is invoked, provided the conditions specified by the last parameter are satisfied.
- Focus management: Pressing the Tab-key in a Swing container, by default, moves focus to the next focusable component. Modifying the JComponent focus properties or replacing the default focus manager allows the modification of the default focus behavior.
- Improved layout support: the JComponent class adds setter (and getter) methods to pass (and retrieve) size requests to (from) the layout manager, such as the setPreferredSize(), setMinimumSize(), setMaximumSize(), setAlignmentX(), and setAlignmentY() methods.
- Pluggable look and feel: Each JComponent object has a corresponding ComponentUI object that performs all the drawing and event handling using the current look and feel, which can be set using the UIManager.setLookAndFeel() method.
- Accessibility support: Assistive technology is built into Swing components, enabling software accessibility for everyone, e.g. via magnifying fonts or audio look-and-feel alternatives.
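Several of these features can be exercised with a few lines of code. The following minimal sketch (the label text and colors are arbitrary) sets a border, a tooltip, and the opacity on a JLabel, which, like any JComponent subclass, inherits all of these capabilities:

```java
import java.awt.Color;
import javax.swing.BorderFactory;
import javax.swing.JLabel;

// Minimal sketch: JComponent-inherited features applied to a JLabel.
public class JComponentFeatures {
    public static void main(String[] args) {
        JLabel label = new JLabel("Status");
        label.setBorder(BorderFactory.createEtchedBorder()); // etched border style
        label.setToolTipText("Shows the current status");    // tooltip shown on hover
        label.setOpaque(true);                               // fill the background
        label.setBackground(Color.LIGHT_GRAY);
        System.out.println("Tooltip: " + label.getToolTipText());
        System.out.println("Opaque: " + label.isOpaque());
    }
}
```

The same calls apply unchanged to buttons, panels, and every other JComponent subclass.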
All lightweight Swing components extend the JComponent class, which keeps a reference to a corresponding component UI, or UI delegate. The name of a UI delegate class is derived from the name of the component by removing the J from the front and adding UI at the end. The UI delegate is responsible for the look and feel and the event handling of a Swing component; e.g. the delegate of the class JLabel is LabelUI.
For the rendering of lightweight Swing components, the paint() method of the JComponent class is used. In particular, a Graphics object is passed to the paint() method, which draws the component, the component’s border, and the component’s children, in that order. When the component has a UI delegate, the delegate’s paint() method is invoked, which clears the background in the case of an opaque component and then paints the component. For double buffered components it paints the component into an offscreen buffer and then copies it into the component’s onscreen area. Since double buffering is provided by Swing, there is no need to override the paint() method to avoid flickering, as is necessary with AWT. To redefine the way a component is painted, the paintComponent() method of the JComponent class should be overridden, and it usually needs to invoke the super.paintComponent() method.
In contrast to AWT components, it is not necessary to override the update() method, which erases the background of a component and then invokes paint(), to avoid flickering. The JComponent class overrides the update() method to invoke the paint() method directly, while the UI delegate is responsible for erasing the background. Flickering is avoided because double buffering is used by the Swing components, i.e. the components are repainted first in an offscreen buffer and then copied to the screen. Of the Swing lightweight components, only JRootPane and JPanel are double buffered by default. Components that reside in a double buffered container are automatically double buffered. The method setDoubleBuffered() of the JComponent class can be used to set whether the receiving component should use a buffer to paint. The offscreen buffer used for double buffering can be obtained using the method RepaintManager.getOffscreenBuffer().
The validate() method of the Container class positions and sizes the components of a container. The revalidate() method of the JComponent class should be invoked on any Swing component that is modified; it invalidates the component and then invokes the validate() method using the event dispatch thread, which results in the component being positioned and sized properly. Sometimes, repaint() should also be called after invoking revalidate().
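This idiom can be sketched as follows (the class name is illustrative; in a real program these calls would run on the event dispatch thread, e.g. via SwingUtilities.invokeLater()):

```java
import java.awt.FlowLayout;
import javax.swing.JButton;
import javax.swing.JPanel;

// Minimal sketch: revalidate() and repaint() after modifying a container.
public class RevalidateSketch {
    public static void main(String[] args) {
        JPanel panel = new JPanel(new FlowLayout());
        panel.add(new JButton("First"));
        // ... later, the container's contents change:
        panel.add(new JButton("Second"));
        panel.revalidate(); // invalidates, then validates via the event dispatch thread
        panel.repaint();    // repaints any stale area afterwards
        System.out.println("Components: " + panel.getComponentCount());
    }
}
```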
A Swing lightweight component can be opaque, i.e. its background is filled with the component’s background color, or partially transparent, i.e. has transparent background. Swing components are by default opaque, which can be changed using the setOpaque(false) method.
The Graphics subclass DebugGraphics supports graphics debugging by slowing the rate of graphical operations and flushing prior to each operation. To use graphics debugging, the setDebugGraphicsOptions() method of the JComponent class should be invoked with one of the debugging options:
- DebugGraphics.LOG_OPTION: causes a text message to be printed.
- DebugGraphics.FLASH_OPTION: causes the drawing to flash several times.
- DebugGraphics.BUFFERED_OPTION: creates an ExternalWindow that displays the operations performed on the View’s offscreen buffer.
- DebugGraphics.NONE_OPTION: the graphics debugging feature is not used.
2. Top-Level Containers
A GUI component must be part of a hierarchy with a top-level container as its root to be displayed onscreen. The top-level container classes provided by Swing are the following:
- JFrame: A top-level window with a title and a border.
- JDialog: The main class for creating a dialog window.
- JApplet: Enables applets to use Swing components.
- JWindow: A window with no controls or title; not, in general, useful on its own.
A standalone application with Swing components uses at least one containment hierarchy with a JFrame object as its root, while a Swing-based applet uses a containment hierarchy with a JApplet object at its root. Menu bars are usually added, using the setJMenuBar() method, only to frames (JFrame) and applets (JApplet).
A top-level container has a content pane, which contains the visible components, and, optionally, a menu bar that is positioned within the top-level container but outside the content pane. To add a component to a container, the method getContentPane() can be used to obtain the content pane, to which the component can be added using the add() method. The default content pane is a simple intermediate container that extends JComponent and uses a BorderLayout as its layout manager. Adding/removing components, setting layout managers, adding borders, etc. are invoked on the content pane, which is a field of the JRootPane class. The getContentPane() method returns a Container object, not a JComponent object, which means that to use the content pane’s JComponent features, either the return value must be typecast, or an (opaque) component must be created and installed as the content pane (using the setContentPane() method).
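A minimal sketch of both approaches follows (the frame title and button labels are arbitrary); it checks for a display first so it degrades gracefully in a headless environment:

```java
import java.awt.BorderLayout;
import java.awt.GraphicsEnvironment;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;

// Minimal sketch: using the default content pane vs. installing one.
public class ContentPaneSketch {
    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("No display available.");
            return;
        }
        JFrame frame = new JFrame("Content pane demo");
        // Approach 1: add to the default content pane (typed as Container).
        frame.getContentPane().add(new JButton("Default pane"), BorderLayout.NORTH);
        // Approach 2: install an opaque JComponent as the content pane,
        // so JComponent features (borders, tooltips, ...) are available on it.
        JPanel pane = new JPanel(new BorderLayout());
        pane.setOpaque(true);
        pane.add(new JButton("Replacement pane"), BorderLayout.CENTER);
        frame.setContentPane(pane);
        frame.pack();
        frame.setVisible(true);
    }
}
```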
Frames: JFrame
A JFrame object is essentially a window with some additional decorations and functionalities, such as a title, a border, and closing and iconifying buttons.
The following program creates a JFrame object that contains a JLabel object.
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
public class FrameApplication1
{
public static void main(String args[])
{
JFrame jfr = new JFrame("My JFrame");
jfr.addWindowListener(new WindowAdapter()
{
public void windowClosed(WindowEvent e)
{
System.exit(0);
}
} );
jfr.setBounds(100,100,800,200);
jfr.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE); // so windowClosed() fires on close
JLabel myLabel = new JLabel(" 1.124J: Foundations of Software Engineering ");
myLabel.setPreferredSize(new Dimension(300,80));
jfr.getContentPane().add(myLabel, BorderLayout.CENTER);
jfr.pack();
jfr.setVisible(true);
}
}
Executing the above Java® application (java FrameApplication1) opens the following window with its left top corner at the location (100,100):
Applets: JApplet
Any applet that contains Swing components must be implemented using the JApplet or any subclass of the JApplet class. Applets are typically loaded by web browsers based on references inserted in html files. Upon the creation of an applet, its init() method is called. An applet does not have a close button or a title bar. The browser calls the start() method after calling the init() method, but also every time the page that contains the applet is revisited. The size of an applet is determined by the browser (or the appletviewer) using the size request provided in the corresponding html file.
Each Swing applet has a root pane (JRootPane) which provides additional functionalities, such as support for a menu bar. Components should be added and layout managers should be set to the root pane and not the applet itself. The default layout manager for the content pane of a Swing applet is the BorderLayout, and not the FlowLayout, which is the default layout manager for AWT Applet.
The following Java® source code demonstrates the use of JApplet to create the simplest possible applet. It is followed by the html file, which can be used to load the corresponding class file (TestingJApplet.class). Finally, a snapshot of the resulting window that appears, when the appletviewer is used to load the html file, is presented.
TestingJApplet.java
import java.awt.*;
import javax.swing.*;
public class TestingJApplet extends JApplet
{
public void init()
{
Icon icon = new ImageIcon("1.124.gif", "1.124 GIF");
JLabel label = new JLabel(" Testing JApplet",
icon, SwingConstants.CENTER);
getContentPane().add(label);
}
}
TestingJApplet.html
<html>
<head>
<title>JApplet Testing # 1</title>
</head>
<body>
<center>
<h2>
<u>JApplet Example # 1</u></h2>
<hr><Applet code="TestingJApplet.class" WIDTH="500" HEIGHT="100"></Applet>
<hr></center>
<p><br>
</body>
</html>
> appletviewer TestingJApplet.html
Dialogs: JDialog
Class JDialog extends the AWT Dialog class and implements the WindowConstants, Accessible, and RootPaneContainer interfaces. The JDialog class can be used to create a window that depends on another window (e.g. disappearing/appearing whenever the other window is iconified/deiconified) and has a border and a title bar. The JDialog class, which extends the Dialog class by adding a root pane and support for a default close operation, can be used to create a custom dialog. It is a heavyweight Swing container that contains a JRootPane instance to which components are added and layout managers are set.
Creating a dialog directly with the JDialog class requires creating and laying out the components, as well as creating the buttons and listeners necessary to dismiss the dialog. The JOptionPane class, which creates JDialog instances internally, provides many static methods that can be used to create a variety of standard dialogs. A dialog provided by JOptionPane is modal, i.e. while it is visible it blocks user input to all other windows in the program. The JDialog class must be used to create a non-modal dialog.
The following example is a Java® application which provides a button in a frame. Whenever the button is pushed a non-modal dialog is created as shown in the snapshot of the program execution, which follows the source code:
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
public class DialogApplication1
{
public static void main(String args[])
{
MyJFrame jfr = new MyJFrame("My JFrame");
jfr.addWindowListener(new WindowAdapter()
{
public void windowClosed(WindowEvent e)
{
System.out.println("Exiting the program");
System.exit(0);
}
} );
jfr.setBounds(100,100,800,200);
jfr.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
JButton myButton = new JButton("Show JDialog instance");
myButton.setPreferredSize(new Dimension(300,80));
myButton.addActionListener(jfr);
jfr.getContentPane().add(myButton, BorderLayout.CENTER);
jfr.pack();
jfr.setVisible(true);
}
}
class MyJFrame extends JFrame implements ActionListener
{
public MyJFrame(String title)
{
super(title);
}
public void actionPerformed(ActionEvent e)
{
if (e.getActionCommand().equals("Show JDialog instance"))
{
System.out.println("Button has been pushed causing " +
"the appearance of a JDialog");
JDialog dialog = new JDialog(this, "A Non-Modal Dialog");
JLabel myLabel = new JLabel(" 1.124J: Foundations of Software Engineering ");
myLabel.setPreferredSize(new Dimension(300,80));
dialog.getContentPane().add(myLabel, BorderLayout.CENTER);
dialog.pack();
dialog.setVisible(true);
}
}
}
Option panes are components that can be placed in dialog boxes. The JOptionPane class allows the creation and customization of several different kinds of dialogs, providing support for laying out standard dialogs, specifying the dialog’s title and text, providing icons, customizing the button text and the components the dialog displays, and specifying where the dialog should appear onscreen. Any one of four standard JOptionPane icons (indicating question, information, warning, and error) can be displayed by the dialog.
The JOptionPane class allows the easy pop up of a standard dialog box, providing the following methods:
- showOptionDialog(): displays a customized dialog with a variety of buttons, or a collection of components, allowing a selection from a set of options.
- showMessageDialog(): provides a simple message with a button to dismiss it.
- showConfirmDialog(): asks for the user’s confirmation, providing a number of choices.
- showInputDialog(): shows a question-message dialog requesting input from the user.
The Java® application with the following source code uses the showMessageDialog() method of the JOptionPane class to create a message dialog as it is shown in the snapshot following the source code:
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
public class DialogApplication2
{
public static void main(String args[])
{
MyJFrame jfr = new MyJFrame("My JFrame");
jfr.addWindowListener(new WindowAdapter()
{
public void windowClosed(WindowEvent e)
{
System.out.println("Exiting the program");
System.exit(0);
}
} );
jfr.setBounds(100,100,800,200);
jfr.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
JButton myButton = new JButton("Show JDialog instance");
myButton.setPreferredSize(new Dimension(300,80));
myButton.addActionListener(jfr);
jfr.getContentPane().add(myButton, BorderLayout.CENTER);
jfr.pack();
jfr.setVisible(true);
}
}
class MyJFrame extends JFrame implements ActionListener
{
public MyJFrame(String title)
{
super(title);
}
public void actionPerformed(ActionEvent e)
{
if (e.getActionCommand().equals("Show JDialog instance"))
{
System.out.println("Button has been pushed causing the invocation " +
"of the JOptionPane.showMessageDialog() method");
JOptionPane.showMessageDialog(this, "Testing the showMessageDialog() method");
}
}
}
The JFC/Swing Tutorial provided by Sun has a series of tables with details on the:
- “Showing” Standard Modal Dialogs Methods (using the JOptionPane Class)
- Methods for Using JOptionPanes Directly
- Other JOptionPane Constructors and Methods
- Frequently Used JDialog Constructors and Methods
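The standard-dialog methods listed above can be combined as in the following sketch (the messages and titles are arbitrary). The static show*Dialog() methods block until the user dismisses the dialog and return the user’s choice or input; a headless check keeps the sketch runnable without a display:

```java
import java.awt.GraphicsEnvironment;
import javax.swing.JOptionPane;

// Minimal sketch: chaining JOptionPane's standard modal dialogs.
public class StandardDialogs {
    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("No display available; dialogs skipped.");
            return;
        }
        int choice = JOptionPane.showConfirmDialog(
                null, "Save changes?", "Confirm", JOptionPane.YES_NO_CANCEL_OPTION);
        if (choice == JOptionPane.YES_OPTION) {
            String name = JOptionPane.showInputDialog(null, "File name:");
            JOptionPane.showMessageDialog(null, "Saving as " + name,
                    "Info", JOptionPane.INFORMATION_MESSAGE);
        }
    }
}
```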
Windows: JWindow
JWindow, which extends the AWT Window class, provides a window with no resizing controls, border, menu bar, or title. A JWindow instance can be used to display something in a rectangular region (window) without any title or menus on top of other components.
Although an instance of JWindow is a heavyweight component, i.e. it does not inherit the capabilities of the JComponent class, it contains an instance of JRootPane, which can be manipulated like any other lightweight component. Actually, the main difference between JWindow and the AWT Window is that the former contains a root pane, to which components should be added and layout managers should be set.
The following program creates a JWindow instance with a JButton instance with an icon on it. When the mouse clicks on the button the actionPerformed() method is called and the program exits after a short message is printed.
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
public class WindowApplication1
{
public static void main(String args[])
{
MyWindow window = new MyWindow();
Icon icon = new ImageIcon("1.124.gif", "1.124 GIF");
JButton myButton = new JButton("Exit", icon);
myButton.setPreferredSize(new Dimension(300,80));
myButton.addActionListener(window);
window.getContentPane().add(myButton);
window.setBounds(100,50,500,150);
window.setVisible(true);
}
}
class MyWindow extends JWindow implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
if (e.getActionCommand().equals("Exit"))
{
System.out.println("Exiting the program");
System.exit(0);
}
}
}
The execution of the above application gives a window that looks like the following:
3. Intermediate Swing Containers
Intermediate Swing components are non-top-level containers that are used to contain other components.
Panels: JPanel
A panel, which is implemented using the JPanel class, is the most frequently used intermediate container. The JPanel class extends the JComponent class, which provides, among other things, double buffering support, and implements the Accessible interface. Since it can be used both as a general-purpose container for lightweight components and as a component (e.g. as a canvas) onto which text and graphics can be rendered, it is, essentially, the successor of the AWT Panel and Canvas classes. A panel can be made non-opaque (i.e. transparent) by invoking the setOpaque(false) method. The default layout manager of a JPanel object is an instance of the FlowLayout class, which places the panel’s contents in a row. To add components to a panel, the add() method is used.
Example of JPanel
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
public class MyJPanel extends JPanel
{
MyJPanel()
{
setBackground(Color.cyan);
setPreferredSize(new Dimension(200,200));
}
public void paintComponent(Graphics g)
{
super.paintComponent(g);
g.setColor(Color.red);
g.drawLine(100,100, 350,200);
g.setColor(Color.blue);
g.drawRect(200,50, 50, 20);
g.setColor(Color.black);
g.fillRect(20,150, 90, 150);
g.drawString("Testing JPanel and Graphics", 120, 250);
}
public static void main(String args[])
{
final JFrame jfr = new JFrame("JPanel Example");
JPanel panel = new MyJPanel();
jfr.getContentPane().add(panel, BorderLayout.CENTER);
jfr.setBounds(200,100,600,400);
jfr.setVisible(true);
jfr.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
jfr.addWindowListener(new WindowAdapter()
{
public void windowClosed(WindowEvent e)
{
System.exit(0);
}
} );
}
}
The above example, when executed, gives the following window:
Internal Frames: JInternalFrame
Internal frames are lightweight components to which heavyweight components should not be added. An internal frame is an instance of the JInternalFrame class that provides many of the features of a native frame, such as closing, iconification, title displaying, dragging, resizing, and menu-bar support. Typically, internal frames are contained in JDesktopPane instances.
Components are added and layout managers are set on the content pane of the JInternalFrame. An internal frame needs to be added to a container, since it is not a top-level container. As with a regular frame, the setVisible(true), or the show() method should be invoked on an internal frame to display it.
The size of an internal frame must be set, e.g. using any of the setSize(), pack(), or setBounds() methods, since by default it has zero size and is therefore invisible. The location of an internal frame may also need to be set, since the default is (0,0), i.e. the upper left of its container. JInternalFrame or JOptionPane should be used to implement internal dialogs.
Internal frames can be used in combination with the JDesktopPane to implement Multiple Document Interface (MDI) applications, in which a window acts as a desktop for documents created in the application. Class JDesktopPane extends the JLayeredPane class and implements the Accessible interface. Desktop panes are intended to contain internal frames allowing the development of MDI applications. Desktop managers, which are instances of the DesktopManager, implement the look-and-feel-specific behavior of desktop panes.
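A minimal MDI sketch follows (titles and sizes are arbitrary): a JDesktopPane holding two JInternalFrames. Note that, as described above, each internal frame starts with zero size and must be sized and made visible explicitly:

```java
import java.awt.GraphicsEnvironment;
import javax.swing.JDesktopPane;
import javax.swing.JFrame;
import javax.swing.JInternalFrame;
import javax.swing.JLabel;

// Minimal MDI sketch: internal frames inside a desktop pane.
public class DesktopSketch {
    public static void main(String[] args) {
        JDesktopPane desktop = new JDesktopPane();
        for (int i = 1; i <= 2; i++) {
            JInternalFrame doc = new JInternalFrame(
                    "Document " + i, true, true, true, true); // resizable, closable, maximizable, iconifiable
            doc.add(new JLabel("Contents of document " + i));
            doc.setBounds(30 * i, 30 * i, 250, 150); // default size is 0x0
            doc.setVisible(true);
            desktop.add(doc);
        }
        System.out.println("Frames on desktop: " + desktop.getAllFrames().length);
        if (!GraphicsEnvironment.isHeadless()) {
            JFrame frame = new JFrame("MDI demo");
            frame.setContentPane(desktop);
            frame.setSize(600, 400);
            frame.setVisible(true);
        }
    }
}
```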
Root Panes: JRootPane
A JRootPane is contained in all Swing top-level containers. It is a fundamental component in the container hierarchy, providing to the top-level (heavyweight) containers (JFrame, JDialog, JWindow, and JApplet) the JComponent’s capabilities. Not only these heavyweight containers but also the lightweight container JInternalFrame delegate their operations to a JRootPane instance, which is created automatically as soon as any of these containers is instantiated. Therefore, almost all Swing components reside in a JRootPane instance. The RootPaneContainer interface is implemented by the components that have a single JRootPane child: JFrame, JDialog, JWindow, JApplet, and JInternalFrame.
A root pane consists of the following components, as shown in the following figure (adapted from the Sun’s Java® Tutorial):
- Glass pane: The glass pane is the topmost component in a root pane. It is hidden by default, unless it is made visible. It is also completely transparent unless its paint() method is overridden. The glass pane, when it is visible, can trap mouse events. If it expresses interest in handling mouse events, by adding a mouse listener or enabling mouse events, it blocks all input events from reaching the components in the content pane, since it is the topmost lightweight component in its container. The glass pane can also be used to paint over an area that already contains one or more components.
- Layered pane: The layered pane, which is underneath the glass pane, can be used to place components on separate layers. The layered pane positions its contents, optionally, in a specified z-order. A JLayeredPane allows the components it contains to be placed in specific layers, which control the depth at which the components are displayed. Each layer is assigned a specific numerical value, which can be explicitly set using the setLayer() method of the JLayeredPane class. Also, the relative depth of a component with respect to other components on the same layer can be specified using the setPosition() method. Positions are specified using an integer between -1 and the number of components at that depth minus one. Overlapping components can appear one on top of the other, with layers with higher values displayed in front of those with lower values. The JLayeredPane class assigns certain values to some specific layers, as described in the corresponding API documentation. Adding a heavyweight component to a JLayeredPane does not help, since a heavyweight component is always displayed above lightweight components. A JLayeredPane instance contains, in turn, a content pane and an optional menu bar.
- Content pane: It is the container of the root pane’s visible components; it contains the applet’s/application’s components.
- Optional menu bar: It is used for the root pane’s container’s menus.
The z-order of components is determined by the order with which they are added to their container. The first component added to a container is displayed on top of all other components.
The RootPaneContainer interface provides accessor methods for the JRootPane and its contents (glass, layered, and content panes and the optional menu bar).
Scroll Panes: JScrollPane and JViewport
For scrolling, Swing has two lightweight containers, JScrollPane and JViewport, one interface, named Scrollable, and one component, named JScrollBar.
The JScrollPane is a container that can be used (instead of the AWT ScrollPane ) to display a component that is larger than the available display space. It extends the JComponent class and implements the ScrollPaneConstants, and Accessible interfaces. It contains a JViewport instance. It controls the latter as well as the optional vertical and horizontal scrollbars, and optional row and column header viewports. Using the scrollbars the viewport’s view and the row and column headers can be scrolled. There are, also, corner components, which are by default instances of JPanel with background color that of the scrollpane background.
The following figure (taken from the Java® 2 API) shows the JScrollPane components.
The JViewport is rarely instantiated and used directly. It is used, as shown above, by JScrollPane instances, through which a particular region of a view is displayed. The position of the view displayed by a viewport can be modified in order to allow different regions of the view to be displayed selectively.
The JScrollBar class can be used in some special cases where it is necessary to implement manual scrolling because the provided scrolling model may not be satisfactory.
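A minimal sketch of the usual case follows (the text area size is arbitrary): a component larger than the window is wrapped in a JScrollPane, whose viewport shows only part of it, with scrollbars as configured by the scrollbar-policy constants:

```java
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;

// Minimal sketch: a large view component inside a scroll pane.
public class ScrollSketch {
    public static void main(String[] args) {
        JTextArea area = new JTextArea(40, 80); // larger than a typical window
        JScrollPane scroll = new JScrollPane(area,
                JScrollPane.VERTICAL_SCROLLBAR_ALWAYS,
                JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED);
        // The scroll pane's viewport holds the view component:
        System.out.println("View: " + scroll.getViewport().getView().getClass().getSimpleName());
        if (!GraphicsEnvironment.isHeadless()) {
            JFrame frame = new JFrame("Scroll demo");
            frame.getContentPane().add(scroll);
            frame.pack();
            frame.setVisible(true);
        }
    }
}
```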
Split Panes: JSplitPane
A JSplitPane contains two components separated by a divider, which can be dragged to adjust its position. The two components contained in the JSplitPane can be oriented either vertically (one on top of the other) or horizontally (side by side).
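A minimal sketch (the labels are placeholders) creating a horizontal split with a draggable divider:

```java
import javax.swing.JLabel;
import javax.swing.JSplitPane;

// Minimal sketch: two components side by side in a split pane.
public class SplitSketch {
    public static void main(String[] args) {
        JSplitPane split = new JSplitPane(JSplitPane.HORIZONTAL_SPLIT,
                new JLabel("left"), new JLabel("right"));
        split.setDividerLocation(200);     // initial divider position in pixels
        split.setOneTouchExpandable(true); // arrows to collapse either side
        System.out.println("Orientation: " + split.getOrientation());
    }
}
```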
Tabbed Panes: JTabbedPane
A JTabbedPane allows the selection of one component, out of the several (usually panels) it may contain, to be displayed. It enables the user to switch between a group of components by clicking on a tab with a given title and/or icon. The component to be viewed is specified by selecting the corresponding tab.
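A minimal sketch (tab titles and contents are arbitrary); selecting a tab programmatically is equivalent to the user clicking it:

```java
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTabbedPane;

// Minimal sketch: a tabbed pane with two tabs.
public class TabsSketch {
    public static void main(String[] args) {
        JTabbedPane tabs = new JTabbedPane();
        JPanel general = new JPanel();
        general.add(new JLabel("General settings"));
        tabs.addTab("General", general);
        tabs.addTab("Advanced", new JLabel("Advanced settings"));
        tabs.setSelectedIndex(1); // programmatic equivalent of clicking "Advanced"
        System.out.println("Selected tab: " + tabs.getTitleAt(tabs.getSelectedIndex()));
    }
}
```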
Tool Bars: JToolBar
The JToolBar class, which represents Swing tool bars, is a container that groups several components (of any kind) vertically or horizontally into a column or row, respectively. The user can drag the tool bar to a different edge of its container or out into its own window (unless the floatable property is set to false, making the tool bar immovable).
4. Atomic Components
Atomic components are components that are not meant to contain other components. They are subclasses of the JComponent class.
Buttons: JButton
Class JButton extends the AbstractButton class and implements the Accessible interface, providing a button. A Swing button can display not only text but also an image. In addition, a mnemonic, i.e. an underlined letter (from the button’s text) which can be used as a keyboard alternative, can be set and used.
The abstract class AbstractButton is the superclass not only of JButton but also of the following: JCheckBox, JRadioButton, JMenuItem, JCheckBoxMenuItem, JRadioButtonMenuItem, and JToggleButton. Class AbstractButton provides most of the functionality of all these classes. It extends the JComponent class and implements the ItemSelectable and SwingConstants interfaces.
An action listener can be registered with a button, enabling event handling; the listener is notified every time the user clicks the button through invocation of its actionPerformed() method.
Example
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
public class AppletApplicationBut extends JApplet
implements ActionListener
{
protected JButton button1;
protected Icon icon;
public void init()
{
icon = new ImageIcon("1.124.gif"); // assign the field (the original shadowed it with a local)
button1 = new JButton("Welcome!", icon);
button1.setVerticalTextPosition(AbstractButton.BOTTOM);
button1.setHorizontalTextPosition(AbstractButton.CENTER);
button1.setActionCommand("welcome message");
button1.setMnemonic(KeyEvent.VK_M); // Alt-M
button1.addActionListener(this);
getContentPane().add(button1, BorderLayout.CENTER);
}
public void actionPerformed(ActionEvent e)
{
if(e.getActionCommand().equals("welcome message"))
{
System.out.println("Welcome to 1.124!");
}
}
public static void main(String args[])
{
JFrame myFrame = new JFrame("Button Example");
JApplet myApplet = new AppletApplicationBut();
myApplet.init();
myFrame.setContentPane(myApplet.getContentPane());
myFrame.setBounds(100,150,500,300);
myFrame.pack();
myFrame.setVisible(true);
myFrame.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
myFrame.addWindowListener(new WindowAdapter()
{
public void windowClosed(WindowEvent e)
{
System.exit(0);
}
} );
}
}
Execution of the program:
Toggle Buttons: JToggleButton
The JToggleButton class can be used to create toggle buttons, i.e. two-state buttons that can be selected and deselected. The JToggleButton class also serves as the base class for the JRadioButton and JCheckBox classes.
Check Boxes: JCheckBox and JCheckBoxMenuItem
The JCheckBox class can be used to create check box buttons, typically, to represent choices that are not mutually exclusive. Any number of check boxes in a group of check boxes, or even none, can be selected. Class JCheckBox extends JToggleButton class and implements the Accessible interface.
The JCheckBoxMenuItem class, which is a JMenuItem subclass, can be used to put check boxes in a menu.
Radio Buttons: JRadioButton and JRadioButtonMenuItem
Radio buttons are implemented in Swing using the JRadioButton class, typically, to represent choices that are mutually exclusive. Only one radio button, at a time, can be selected from a group of radio buttons.
The class JRadioButtonMenuItem, which is a JMenuItem subclass, can be used to put a radio button in a menu.
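The mutual exclusion among radio buttons is enforced by registering them with a javax.swing.ButtonGroup, as in the following minimal sketch (the labels are arbitrary); selecting one button in the group automatically deselects the others:

```java
import javax.swing.ButtonGroup;
import javax.swing.JRadioButton;

// Minimal sketch: mutually exclusive radio buttons via ButtonGroup.
public class RadioSketch {
    public static void main(String[] args) {
        JRadioButton small = new JRadioButton("Small", true); // initially selected
        JRadioButton large = new JRadioButton("Large");
        ButtonGroup group = new ButtonGroup();
        group.add(small);
        group.add(large);
        large.setSelected(true); // automatically deselects "Small"
        System.out.println("Small selected: " + small.isSelected());
        System.out.println("Large selected: " + large.isSelected());
    }
}
```

Note that the ButtonGroup itself is invisible; the buttons must still be added individually to a container for display.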
Menus: JMenu, JMenuItem, JCheckBoxMenuItem, and JRadioButtonMenuItem
Swing provides components for menus (e.g. menus within menu bars and pop-up menus), and for menu items. Class JMenuItem extends the AbstractButton class and implements the Accessible and MenuElement interfaces.
A menu item, which is an instance of JMenuItem or one of its subclasses, is essentially a button, and so is a menu, which is an instance of the JMenu class or one of its subclasses. Menu items, since they are buttons, send action events when activated. In addition, they can send MenuDragMouseEvents when the mouse is dragged over them, and MenuKeyEvents when a key is pressed, released, or typed.
An accelerator, which offers a keyboard shortcut to bypass navigating the menu hierarchy, can be specified for menu items. A mnemonic, which offers a way to use the keyboard to navigate the menu hierarchy, is a key pressed in conjunction with a modifier key (typically Alt).
The following image (adapted from the Sun’s Swing/JFC Tutorial) presents the menu-related inheritance hierarchy:
Since a menu is a lightweight component, a component of any type can be added to it. A menu that resides in a menu bar is considered as a top-level menu, while a menu contained in a menu is considered a cascading menu.
Menus are buttons with a pop-up menu associated with them, which is displayed beneath the menu whenever the menu is activated. Items can be added, inserted, or removed from menus using the JMenu methods. The JMenu class extends the JMenuItem class and implements the Accessible and MenuElement interfaces. Components and separators can also be added to JMenu instances. Menu listeners can be added to and removed from JMenu instances.
The JCheckBoxMenuItem and JRadioButtonMenuItem classes are subclasses of the JMenuItem class.
A menu bar, implemented by the JMenuBar class, contains a row of menus and may be used both in applications and in applets. Class JMenuBar extends the JComponent class and implements the Accessible and MenuElement interfaces.
Pop-up menus are implemented in Swing using the JPopupMenu class, which is a subclass of the JComponent class and implements the Accessible and MenuElement interfaces. Pop-up menus are usually shown in response to a pop-up trigger, i.e. a sequence of events, which depends on the window system.
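The pieces above fit together as in the following minimal sketch (the menu titles and key choices are arbitrary): a menu bar holding a File menu with a mnemonic, and an Open item with both a mnemonic and a Ctrl-O accelerator:

```java
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.JMenuItem;
import javax.swing.KeyStroke;

// Minimal sketch: menu bar -> menu -> menu items, with mnemonic/accelerator.
public class MenuSketch {
    public static void main(String[] args) {
        JMenuItem open = new JMenuItem("Open...", KeyEvent.VK_O); // mnemonic O
        open.setAccelerator(KeyStroke.getKeyStroke(KeyEvent.VK_O, InputEvent.CTRL_DOWN_MASK));
        open.addActionListener(e -> System.out.println("Open selected"));

        JMenu file = new JMenu("File");
        file.setMnemonic(KeyEvent.VK_F); // Alt-F opens the menu
        file.add(open);
        file.addSeparator();             // a separator between item groups
        file.add(new JMenuItem("Exit"));

        JMenuBar bar = new JMenuBar();
        bar.add(file);
        // In an application: frame.setJMenuBar(bar);
        System.out.println("Items in File menu: " + file.getItemCount()); // includes the separator
    }
}
```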
Labels: JLabel
Instances of the JLabel class can be used to display images and unselectable text. The JLabel class extends the JComponent class and implements the Accessible interface.
Although the JLabel and JButton classes are not related by inheritance, they have many similar methods and identical properties. A menu usually appears either in a menu bar, which contains one or more menus, or as a pop-up menu, which becomes visible only when the user performs a platform-specific mouse action.
Separators: JSeparator
Separators are used to separate components or sets of components. They are implemented in Swing using the class JSeparator which extends the JComponent class and implements the Accessible and SwingConstants interfaces.
Monitor Progress: JProgressBar
Swing provides the JProgressBar class to indicate graphically how much of a task has been completed and how much remains. JProgressBar extends the JComponent class and implements the Accessible and SwingConstants interfaces.
The ProgressMonitor and ProgressMonitorInputStream utilities can also be used to monitor the progress of a time-consuming task.
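A minimal JProgressBar sketch (the class name and values are illustrative):

```java
import javax.swing.JProgressBar;

public class ProgressDemo {
    // Creates a determinate progress bar spanning 0..100 and sets it to 40%.
    public static JProgressBar buildBar() {
        JProgressBar bar = new JProgressBar(0, 100); // minimum, maximum
        bar.setValue(40);                            // 40% of the task completed
        bar.setStringPainted(true);                  // paint the percentage string
        return bar;
    }
}
```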
Sliders: JSlider
A slider displays a value between a minimum and a maximum, which can be changed by dragging the slider’s knob or by clicking on the slider. Sliders are implemented in Swing by the JSlider class, which is a subclass of the JComponent class and implements the Accessible and SwingConstants interfaces.
Methods of the JSlider class can be used to set major and minor tick marks, which indicate specific values associated with the slider, and labels, which are displayed at major tick mark locations.
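The tick-mark and label configuration above can be sketched as follows (the spacing values are illustrative):

```java
import javax.swing.JSlider;

public class SliderDemo {
    // A horizontal slider from 0 to 100, starting at 50, with major ticks
    // every 25 units, minor ticks every 5, and labels at the major ticks.
    public static JSlider buildSlider() {
        JSlider s = new JSlider(JSlider.HORIZONTAL, 0, 100, 50);
        s.setMajorTickSpacing(25);
        s.setMinorTickSpacing(5);
        s.setPaintTicks(true);   // draw the tick marks
        s.setPaintLabels(true);  // draw labels at major tick locations
        return s;
    }
}
```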
File Choosers: JFileChooser
File choosers are lightweight components, instances of the JFileChooser class, that are placed in a dialog or another container to provide a GUI for selecting a file or navigating the filesystem. File choosers support files-only, directories-only, and files-and-directories display modes, as well as single and multiple selection. Class JFileChooser extends the JComponent class and implements the Accessible interface.
Although a file chooser displays all files and directories by default, file filtering can be applied so that only certain files are shown, e.g. based on file type. The method setFileHidingEnabled(false) can be used to show hidden files, which are not shown by default.
File choosers can be customized in many ways using the methods of the JFileChooser class. They can also accommodate accessory components, which can show the contents of selected files or simply make the chooser fancier.
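A sketch of the customizations described above (the class name and filter description are illustrative):

```java
import javax.swing.JFileChooser;
import javax.swing.filechooser.FileNameExtensionFilter;

public class ChooserDemo {
    // Configures a chooser that shows files and directories, allows multiple
    // selection, filters for .java files, and does not hide hidden files.
    public static JFileChooser buildChooser() {
        JFileChooser fc = new JFileChooser();
        fc.setFileSelectionMode(JFileChooser.FILES_AND_DIRECTORIES);
        fc.setMultiSelectionEnabled(true);
        fc.setFileFilter(new FileNameExtensionFilter("Java sources", "java"));
        fc.setFileHidingEnabled(false); // show hidden files
        return fc;
    }
}
```

Calling fc.showOpenDialog(parent) would display the chooser in a dialog and return a value such as JFileChooser.APPROVE_OPTION.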
Color Choosers: JColorChooser
Color choosers, represented by the JColorChooser class, provide users with a palette of colors and allow them to manipulate and choose a color. Class JColorChooser extends the JComponent class and implements the Accessible interface. A color chooser contains a set of color chooser panels in a tabbed pane and a preview panel that displays the selected color.
Combo Boxes: JComboBox
Combo boxes, which are implemented using the JComboBox class, contain an editable area and a drop-down list of selectable items. Class JComboBox extends the JComponent class and implements the Accessible, ItemSelectable, ListDataListener, and ActionListener interfaces.
Both the JComboBox and JList classes can be used to display a list of items. However, combo boxes can have an editor, while list cells are not editable. In addition, combo boxes support key selection, while lists do not.
Lists: JList
Swing lists, implemented using the JList class, can be used to display a list of selectable objects from which the user can choose. Class JList extends the JComponent class and implements the Accessible and Scrollable interfaces. Although the JList class does not itself provide scrolling support, instances of JList are almost always placed in scroll panes. An instance of the ListSelectionModel interface, which a list uses to manage its selection, allows any combination of items to be selected at a time.
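The list-plus-scroll-pane pattern can be sketched as follows (the class name and item strings are illustrative):

```java
import javax.swing.JList;
import javax.swing.JScrollPane;
import javax.swing.ListSelectionModel;

public class ListDemo {
    // A list of strings whose selection model permits any combination
    // of items to be selected at once.
    public static JList<String> buildList() {
        JList<String> list = new JList<>(new String[] {"ant", "bee", "cat", "dog"});
        list.setSelectionMode(ListSelectionModel.MULTIPLE_INTERVAL_SELECTION);
        return list;
    }

    // JList does not scroll by itself, so it is wrapped in a scroll pane.
    public static JScrollPane scrollable(JList<String> list) {
        return new JScrollPane(list);
    }
}
```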
Tables: JTable
Tables, implemented by the JTable class, can be used to display data in rows and columns. The user is, optionally, allowed to edit the data. A number of selection modes, e.g. row, column, and cell selection, are supported by Swing tables. A table consists of a table header, column headers, and columns of cell values.
Swing provides a package, javax.swing.table, that contains all interfaces and classes related to tables. The main Swing class associated with tables is JTable, which extends the JComponent class and implements the Accessible, TableModelListener, Scrollable, TableColumnModelListener, ListSelectionListener, and CellEditorListener interfaces.
A table is almost always contained in a scroll pane, which automatically takes the table’s header (with the column names) and puts it at the top of the table, so the column names remain visible even when the user scrolls down.
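A minimal table sketch (the class name and cell data are illustrative):

```java
import javax.swing.JScrollPane;
import javax.swing.JTable;

public class TableDemo {
    // A two-column table with cell selection enabled.
    public static JTable buildTable() {
        Object[][] rows = { {"Ada", "Math"}, {"Grace", "CS"} };
        Object[] columns = { "Name", "Field" };
        JTable table = new JTable(rows, columns);
        table.setCellSelectionEnabled(true); // select individual cells
        return table;
    }

    // The scroll pane automatically picks up the table header, so the
    // column names stay visible while the rows scroll.
    public static JScrollPane scrollable(JTable table) {
        return new JScrollPane(table);
    }
}
```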
Trees: JTree
Trees, which are implemented by the JTree class, can be used to display hierarchical data using folders and leaf items. Class JTree extends the JComponent class and implements the Accessible and Scrollable interfaces.
Every tree has a root node from which all other nodes, which may have children themselves, descend. The user can expand and collapse branch nodes (nodes that may have children) by double-clicking them or by clicking the folder’s handle.
Tool Tips: JToolTip
Tooltips, which are implemented by the JToolTip class, are used to display a tip for a component. Class JToolTip extends the JComponent class and implements the Accessible interface. The method setToolTipText(String text) of the JComponent class is used to associate a tooltip with a Swing component.
With tooltips, a string describing the functionality of a JComponent subclass object can be displayed above it, providing help to the user of the component whenever the cursor rests over it, based on timing characteristics specified by the ToolTipManager class.
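Associating a tooltip with a component takes a single call (the class name and strings are illustrative):

```java
import javax.swing.JButton;

public class TipDemo {
    // Associates a tooltip string with a component; the ToolTipManager
    // displays it when the cursor rests over the button.
    public static JButton buildButton() {
        JButton b = new JButton("Save");
        b.setToolTipText("Writes the document to disk");
        return b;
    }
}
```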
Text Components: JTextComponent, JTextField, JTextArea, JEditorPane, JTextPane, and JPasswordField
Text components display text, which can, optionally, be editable by the user. Swing provides five classes that are subclasses of the JTextComponent class. JTextComponent extends the JComponent class and implements the Accessible and Scrollable interfaces.
The following figure (adapted from Sun’s Java® Tutorial) shows the hierarchy of JTextComponent, which is the base class for Swing text components.
- JTextField allows the editing of a single line of text.
- JTextArea is a multi-line area that displays plain text.
- JEditorPane enables the editing of various kinds of content.
- JTextPane is a lightweight text component that can be marked up with attributes that are represented graphically.
- JPasswordField allows the editing of a single line of text while not showing the original characters.
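Three of the text components listed above can be sketched as follows (the class name, sizes, and strings are illustrative):

```java
import javax.swing.JPasswordField;
import javax.swing.JTextArea;
import javax.swing.JTextField;

public class TextDemo {
    // A single editable line of text, 20 columns wide.
    public static JTextField buildField() {
        return new JTextField("one line of text", 20);
    }

    // A multi-line area of plain text with line wrapping.
    public static JTextArea buildArea() {
        JTextArea area = new JTextArea(5, 40); // 5 rows, 40 columns
        area.setLineWrap(true);
        area.append("multi-line text");
        return area;
    }

    // A single line whose original characters are not shown.
    public static JPasswordField buildPassword() {
        JPasswordField pw = new JPasswordField(12);
        pw.setEchoChar('*');
        return pw;
    }
}
```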
|
common_crawl_ocw.mit.edu_82
|
Course Meeting Times
Lectures: 2 sessions / week, 1.5 hours / session
Course Description
1.151 is a first-year graduate subject, very similar in content to 1.010, which is a sophomore-level undergraduate subject. Both aim at introducing students to quantitative uncertainty analysis and risk assessment for engineering applications. The subjects cover similar material, but 1.151, the graduate version, includes additional topics (such as system reliability) and is faster-paced and more in-depth. The undergraduate course includes weekly recitations, mainly to solve problems, review material presented in class, and engage students in bi-weekly 30-minute mini-quizzes. Along with these small quizzes, there is a final exam.
Both subjects try to strike a balance between mathematical rigor and applications. No previous familiarity with probability or statistics is assumed. However, students should be conversant with basic linear algebra (vectors and matrices) and calculus (derivatives and integrals).
Emphasis is on probability theory and its applications, with a smaller module at the end covering basic topics in statistics (parameter estimation, hypothesis testing and regression analysis). The probability part includes events and their probability, the Total Probability and Bayes’ Theorems, discrete and continuous random variables and vectors, the Bernoulli trial sequence and Poisson process models, conditional distributions, functions of random variables and vectors, statistical moments, second-moment uncertainty propagation and second-moment conditional analysis, and various probability models such as the exponential, gamma, normal, lognormal, uniform, beta and extreme-type distributions. In addition, the graduate subject has a module on system reliability, which covers both second-moment and full-distribution techniques. Throughout the subjects, emphasis is on application to engineering and everyday life problems.
Recommended Text
The recommended text for this class is:
Ang, Alfredo, and Wilson Tang. Probability Concepts in Engineering Planning and Design: Vol I - Basic Principles. New York, NY: John Wiley & Sons, 1975. ISBN: 047103200X.
|
common_crawl_ocw.mit.edu_83
|
Course Description
The main objective of this course is to give broad insight into the different facets of transportation systems, while providing a solid introduction to transportation demand and cost analyses. As part of the core in the Master of Science in Transportation program, the course will not focus on a specific transportation mode but will use the various modes to apply the theoretical and analytical concepts presented in the lectures and readings.
Introduces transportation systems analysis, stressing demand and economic aspects. Covers the key principles governing transportation planning, investment, operations and maintenance. Introduces the microeconomic concepts central to transportation systems. Topics covered include economic theories of the firm, the consumer, and the market, demand models, discrete choice analysis, cost models and production functions, and pricing theory. Application to transportation systems include congestion pricing, technological change, resource allocation, market structure and regulation, revenue forecasting, public and private transportation finance, and project evaluation; covering urban passenger transportation, freight, aviation and intelligent transportation systems.
|
common_crawl_ocw.mit.edu_84
|
Course Description
The class will cover quantitative techniques of Operations Research with emphasis on applications in transportation systems analysis (urban, air, ocean, highway, pick-up and delivery systems) and in the planning and design of logistically oriented urban service systems (e.g., fire and police departments, emergency medical services, emergency repair services). It presents a unified study of functions of random variables, geometrical probability, multi-server queueing theory, spatial location theory, network analysis and graph theory, and relevant methods of simulation. There will be discussion focused on the difficulty of implementation, among other topics.
Course Info
Learning Resource Types
- Problem Sets with Solutions
- Exams with Solutions
- Lecture Notes
- Online Textbook
|
common_crawl_ocw.mit.edu_85
|
Course Description
Approaching transportation as a complex, large-scale, integrated, open system (CLIOS), this course strives to be an interdisciplinary systems subject in the “open” sense. It introduces qualitative modeling ideas and various techniques and philosophies of modeling complex transportation enterprises. It also introduces conceptual frameworks for qualitative analysis, such as frameworks for regional strategic planning, institutional change analysis, and new technology development and deployment. And it covers transportation as a large-scale, integrated system that interacts directly with the social, political, and economic aspects of contemporary society. Fundamental elements and issues shaping traveler and freight transportation systems are covered, along with underlying principles governing transportation planning, investment, operations, and maintenance.
Learning Resource Types
- Lecture Notes
- Written Assignments with Examples
|
common_crawl_ocw.mit.edu_86
|
Reading material for this subject consists of three elements:
- Text: Introduction to Transportation Systems by Joseph M. Sussman
- A reader, and
- Various handouts given in class
Required Reading
Text
Sussman, Joseph. Introduction to Transportation Systems. Norwood, MA: Artech House Publishers, 2000. ISBN: 1580531415.
As students of transportation, we expect you to read every word of this text at least once. How the chapters in the text are keyed to the lectures is noted below. As this text grew out of the teaching of this subject, it supports and supplements the lectures.
A second text by the instructor may also be useful:
Sussman, Joseph. Perspectives on Intelligent Transportation Systems (ITS). New York, NY: Springer, 2005. ISBN: 0387232575.
Reader
The materials below outside of the text are considered the reader for this subject. Materials in the reader are all required reading.
Handouts
We will regularly distribute articles and newspaper clippings in class. By doing this, we hope to keep you abreast of current and developing issues in transportation as well as encourage you to get in the habit of reading newspapers and magazines from a transportation perspective. Sometimes we will include an article not explicitly on transportation to stretch your thinking. Occasionally we will ask that you read an article in preparation for discussion in the next class and, in these cases, the material should be considered required. However, most of the time these articles are provided strictly for your information and required reading should take higher priority. In addition, from time to time an article will be distributed “just for fun”. We hope that it will be clear to you which ones these are.
Required Reading List by Block
I. Introduction / Philosophy / Basic Transportation Systems Concepts (Lectures 1-10)
Text:
Introduction to Transportation Systems. Chapters 1-11.
Reader:
Sussman, Joseph M. “Educating the ‘New Transportation Professional.’” ITS Quarterly (Summer 1995): 3-10.
TRB Executive Committee. “Critical Issues in Transportation 2002.” TR News 217 (November-December 2001): 4-11.
II. Freight Transportation (Lectures 11-18)
Text:
Introduction to Transportation Systems. Chapters 12-20.
Reader:
Martland, Carl D. “Rail Freight Service Productivity from the Manager’s Perspective.” Transportation Research 26A, no. 6 (1992): 457-469.
Sussman, Joseph M. “Transportation’s Rich History and Challenging Future–Moving Goods.” Transportation Research Circular 461 (August 1996): 13-19.
III. Traveler Transportation (Lectures 19-27)
Text:
Introduction to Transportation Systems. Chapters 21-30.
Reader:
Hoel, Lester A. “Historical Overview of U.S. Passenger Transportation.” Transportation Research Circular 461 (August 1996): 7-11.
Thompson, Louis S. “High Speed Rail in the United States–Why Isn’t There More?” Japan Railway and Transport Review 3 (October 1994): 32-39.
“To Travel Hopefully: A Survey of Commuting.” The Economist, Special Section (September 5, 1998): 1-18.
IV. Summary (Lecture 28)
Reader:
Sussman, Joseph M. “Transitions in the World of Transportation.” Transportation Quarterly 56, no. 1 (Winter 2002). Eno Transportation Foundation, Washington, DC, 2002.
|
common_crawl_ocw.mit.edu_87
|
Course Description
This course examines the policy, politics, planning, and engineering of transportation systems in urban areas, with a special focus on the Boston area. It covers the role of the federal, state, and local government and the MPO, public transit in the era of the automobile, analysis of current trends and pattern breaks; analytical tools for transportation planning, traffic engineering, and policy analysis; the contribution of transportation to air pollution, social costs, and climate change; land use and transportation interactions, and more. Transportation sustainability is a central theme throughout the course, as well as consideration of whether and how it is possible to resolve the tension between the three E’s (environment, economy, and equity). The goal of this course is to elicit discussion, stimulate independent thinking, and encourage students to understand and challenge the “conventional wisdom” of transportation planning.
Learning Resource Types
- Problem Sets
- Lecture Notes
- Instructor Insights
|
common_crawl_ocw.mit.edu_88
|
Course Description
1.34 focuses on the geotechnical aspects of hazardous waste management, with specific emphasis on the design of land-based waste containment structures and hazardous waste remediation. Topics include: introduction to hazardous waste, definition of hazardous waste, regulatory requirements, waste characteristics, geo-chemistry, and contaminant transport; the design and operation of waste containment structures, landfills, impoundments, and mine-waste disposal; the characterization and remediation of contaminated sites, the superfund law, preliminary site assessment, site investigation techniques, and remediation technologies; and monitoring requirements.
Course Info
Learning Resource Types
- Exams
- Lecture Notes
- Written Assignments
|
common_crawl_ocw.mit.edu_89
|
Course Meeting Times
Lectures: 2 sessions / week, 1.5 hours / session
Content
The course covers two topic areas:
- Investigation and remediation of contaminated sites
- Design and construction of waste-disposal sites
The course emphasizes the practical engineering aspects of these topics, but also covers theoretical aspects of mass transport in the subsurface and how it is important to these topics.
Grading
You can complete an optional paper to substitute for one exam grade but you must take all exams.
Regular attendance and class participation will be considered in assigning final grades.
Collaboration on homework is permitted — please list your collaborators.
Reading
Please complete the reading before each lecture. Some parts of the reading are labeled “skim” in the syllabus below. My goal for you is to be familiar with these documents and their general contents.
You are requested to read a book for the final lecture: Harr, Jonathan. A Civil Action. New York: Random House, 1995.
If you have not taken course 1.72, Ground Water Hydrology, you should review basic principles of the subject. Please see me for suggested reading.
References
Fetter, C. W. Contaminant Hydrogeology. 2nd ed. New Jersey: Prentice-Hall Inc., Upper Saddle River, 1999.
An excellent reference on contaminants in ground water.
Qian, X., R. M. Koerner, and D. H. Gray. Geotechnical Aspects of Landfill Design and Construction. New Jersey: Prentice Hall, Upper Saddle River, 2002.
Comprehensive technical reference on landfills.
McBean, E. A., F. A. Rovers, and G. J. Farquhar. Solid Waste Landfill Engineering and Design. New Jersey: Prentice Hall PTR, Upper Saddle River, 1995.
Although not quite as good a reference as Qian et al., 2002, this is also a good reference and a bargain in a paperback edition available from the Barnes and Noble web site.
Daniel, D. E. Geotechnical Practice for Waste Disposal. London: Chapman and Hall, 1993.
A comprehensive reference that was the course text in past years. It is a bit out of date on remediation technologies, but still a very good reference.
|
common_crawl_ocw.mit.edu_90
|
Figure S4.1 shows how global food production, agricultural inputs, and total population have changed over the period 1960–2010, with all variables expressed as multiples of their 1960 values. The figure was constructed for this class and is based on UN Population Division Data and FAOSTAT data (see S1 and S3). Global population is projected to 2050 in the left plot to provide context. This plot shows that global cereal and meat production grew significantly faster than global population, water diversions, and total cropland. The implication is that per capita global food energy and protein derived from cereals and meat have increased, as well as water use efficiency and aggregate cereal yield.
The plot on the right (note the expanded scale) superimposes trends in nitrogen fertilizer and pesticide use, which have grown much faster than cereal production. This result suggests that global nitrogen and pesticide use efficiencies have decreased substantially since 1960. Overall, these results reflect both the success and environmental impact of the twentieth century ‘Green Revolution’, which has made it possible to feed a growing global population through higher yields driven partly by increased inputs and partly by the development of better crop cultivars.
Figure S4.1 Trends in global food production, agricultural inputs, and population
Another picture emerges in Figure S4.2, which compares meat, cereal, and population averages for the globe and for East Africa (note the change in the vertical scale on the right due to much higher population growth in East Africa). East African cereal production has barely kept up with population growth and meat production has lagged behind. Also, it appears that domestic production in East Africa will need to increase much faster in the future if it is to meet the demand of the rapidly increasing population.
Figure S4.2 Comparison of global and East African trends
Figure S4.3 from FAOSTAT shows the marked differences between conditions in developed and developing countries. These are illustrated in comparisons of per capita calorie and protein consumption (total consumption before food losses and waste) from 1960 to 2010. The plots clearly reveal how much higher USA and EU consumption is than the global average.
Figure S4.3 Food consumption trends (energy and protein) in different regions (FAOSTAT, 2019)
© FAO. All rights reserved. This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Although the global average has been above the UN recommended minimum for some time, the average energy and protein values for the UN’s “least developed country” group have only risen above this threshold since 2000. Figure S4.4 shows that a significant fraction of residents in these countries is still undernourished (FAO, IFAD and WFP, 2014).
Figure S4.4 Malnutrition trends and distribution (FAO, IFAD and WFP, 2014)
© FAO. All rights reserved. This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use/.
The chart in the upper left of Figure S4.4 shows an approximate distribution of global energy consumption (in calories), with undernourished, overweight, and obesity percentages indicated. The trend plot in the upper right shows a gradual decline in the absolute numbers and percentage of undernourished people but the number given for 2011 is still above 800 million. The lower chart maps the percentages of undernourished in national populations. It gives estimates of around 30% in much of sub-Saharan Africa. Many of the undernourished in this region are children who suffer long term effects from shortages in critical nutrients.
Figure S4.5a (Gapminder, 2019) shows the strong correlation between calorie intake and per capita income, by country (shown with colored circles). The population of each country is indicated by the area of its circle. Figure S4.5b shows a similar plot of calorie intake vs. the water availability, which is chosen as an example of a natural resource that might limit food production. Note that low income countries with plentiful water can still have low calorie intake while high income countries with scarce water can still have high calorie intake. In short, income seems to be a stronger determinant of access to food than availability of natural resources, because richer countries can afford to import food.
Figure S4.5 Per capita calorie intake for various countries vs. a) income and b) water availability (Gapminder, 2019).
© Google. All rights reserved. This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use/.
References:
FAO, IFAD and WFP. 2014. The State of Food Insecurity in the World 2014. Strengthening the Enabling Environment for Food Security and Nutrition (PDF - 3MB). Rome, FAO.
Gapminder Tools site. 2019.
|
common_crawl_ocw.mit.edu_91
|
Lambin and Meyfroidt (2011) from Class 2 and many others believe that rain forests should not be included in an inventory of available cropland. Nevertheless, rain forest is still being converted to cropland at high rates.
These figures from Achard et al. (2014) show that the lost forest area in some areas of the Amazon and southeast Asia was up to 50% of total forested land over one decade, 2000–2010. The gross loss of forest cover appears in orange circles, while gross loss from other woodland areas appears in yellow circles. The range is 0–100% loss over the decade, indicated by the size of the circles.
Figure S7.1 Lost forest area in the tropics (Achard et al., 2014)
Maps from Achard et al. 2014. “Determination of Tropical Deforestation Rates and Related Carbon Losses
from 1990 to 2010.” Global Change Biology. 20, no. 8: 2540–2554. © The Authors Global Change Biology.
All rights reserved. This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use/.
The following charts from Hansen et al. (2013) give estimated annual forest loss totals for Brazil and Indonesia from 2000 to 2012. The annual forest loss increment is the slope of the estimated trend line in each chart. Although Brazil’s losses decreased over the plotted period, they are still substantial. Tropical forest loss rates change substantially over time, depending on government policies. It is likely that Brazil’s rates increased at times after 2012.
Figure S7.2 Deforestation trends in Indonesia and Brazil (Hansen et al., 2013)
Figures from Hansen et al. 2013. “High-Resolution Global Maps of 21st-Century Forest Cover Change.”
Science. 342, no. 6160: 850–853. © AAAS. All rights reserved. This content is excluded from
our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Deforestation is not confined to the tropics. This image from Hansen et al. (2013) shows losses in the US and Russia as well.
Figure S7.3 Images showing loss of forest cover in a) Paraguay, b) Indonesia, c) United States, and d) Russia (Hansen et al., 2013)
Figures from Hansen et al. 2013. “High-Resolution Global Maps of 21st-Century Forest Cover Change.”
Science. 342, no. 6160: 850–853. © AAAS. All rights reserved. This content is excluded from
our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
References:
F. Achard, H. D. Eva, et al. 2002. “Determination of Deforestation Rates of the World’s Humid Tropical Forests.” Science. 297, no. 5583: 999–1002.
M. C. Hansen, P. V. Potapov, et al. 2013. “High-Resolution Global Maps of 21st-Century Forest Cover Change.” Science. 342, no. 6160: 850–853.
Eric F. Lambin, Patrick Meyfroidt. 2011. “Global Land Use Change, Economic Globalization, and the Looming Land Scarcity.” Proceedings of the National Academy of Sciences. 108, no. 9: 3465–3472.
|
common_crawl_ocw.mit.edu_92
|
These UN population plots are based on historically reported population values before 2020 and on projections based on predicted trends in fertility (measured as total number of children per woman) starting in 2020 and ending in 2100 (UN Population Division, 2019). The total population (all genders and ages) in the designated region is presented as a median value computed from a Monte Carlo technique. The plots also include high and low fertility bounds (±0.5 child per woman).
Figure S1.1 UN Estimated Population Trends (UN Population Division. 2019)
Courtesy UN Population Division. License: CC BY.
The global estimates in Figure S1.1 show that the population median peaks around 2100 and the low fertility variant peaks around 2055, after which the population decreases. By contrast, even the low fertility variant in the sub-Saharan Africa plot is still increasing at the end of the century. The sub-Saharan Africa median population increases by a factor of nearly 3.5 between 2020 and 2100, growing from 1.1 to 3.8 billion.
The remaining plots show much slower population trends for the US, China, and high-income countries (primarily Europe and North America). Note that the median value for China peaks around 2030 while the high-income median value peaks around 2045. These results indicate that population will be growing primarily in countries that are currently lower income.
Figure S1.2 shows an estimated global population density map for 2000 (Salvatore et al, 2005). The spatial resolution is 30 arc seconds (1 km2 at the equator). Note the high population density in the generally water limited areas of the Middle East, East Africa, South Asia, and northern China (see the comparison to climate maps shown in S8).
Figure S1.2 Population Density (Salvatore et al., 2005).
© FAO. All rights reserved. This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Figure S1.3 Fertility rates over three time periods (UN Population Division. 2019).
Courtesy UN Population Division. License: CC BY.
The maps in Figure S1.3 above show UN fertility rate estimates (in number of children per woman) over two periods in the past and for 2050–2055 (UN Population Division, 2019). The demographic transition that reduced population growth in much of Asia and South America between 1970 and 2015 is apparent. Although African fertility rates also dropped over this period they were still much higher than elsewhere. In some African countries the 2010–2015 average fertility rate was well above 5 children per woman. However, the UN predicts that most countries in Africa will have lower fertility rates by 2050–2055. This suggests that the pressure on African food supplies may gradually become less intense in the second half of the twenty-first century, although the population will still be growing.
Figure S1.4 indicates that there are reasonably strong relationships between fertility rate and per capita income and between fertility rate and childhood mortality in different countries (Gapminder, 2019). Here fertility is measured by the number of children per woman and childhood mortality is measured as mortality for ages 0–5 per thousand born. Each country in the chart is indicated with a colored circle, with an area proportional to the national population. Countries with higher income tend to have lower fertility rates while those with higher childhood mortality tend to have higher fertility rates. Population predictions often rely on trends in explanatory variables such as income and childhood mortality to forecast fertility rates.
Figure S1.4 Effects on fertility (number of children per woman) of a) per capita income and
b) childhood mortality (0–5 year mortality per thousand born), in 2012 (Gapminder, 2019)
© Google. All rights reserved. This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use/.
References:
UN Population Division. 2019. Department of Economic and Social Affairs, Population Dynamics. World Population Prospects 2019.
Mirella Salvatore, Francesca Pozzi, et al. 2005. Mapping Global Urban and Rural Population Distributions. Environment and Natural Resources Working Paper 24, FAO, Rome.
Gapminder. 2019. Gapminder Tools site.
This class introduces two opposing perspectives on food security while also considering the broader question of how society can best address critical human needs. Ehrlich and Ehrlich (2013) present a generalization of the Malthus (2008) argument that unrestrained demand will always exceed the capacity of finite natural resources to meet human needs. Lomborg (2001) relies on recent dramatic increases in global food production to argue that technology has, in fact, been able to keep pace with growing demand. These readings are ideological in nature, but they provide a useful introduction to issues that are frequently encountered in the course. The accompanying videos provide more detail on some of the arguments invoked by both sides of the debate.
The reading by Godfray et al. (2010) provides a more balanced perspective that acknowledges the seriousness of the food security situation and advocates a “sustainable intensification” strategy for meeting anticipated increases in demand. The suggestions made in the paper are based largely on existing technology, except for some of the mid to long-term genetic engineering proposals, which are quite speculative. Overall, this paper describes what might be considered a middle-of-the-road or “establishment” position in the food security debate.
We conclude with a paper by the Nobel economist Amartya Sen (1982) who comments on differences between “nature focused” (or technological) and “social focused” (or political) perspectives on food security. This paper discusses Malthus in more detail and makes the case that there is “no such thing as an apolitical food problem.” Although this course is “nature focused,” it is indeed difficult to avoid political, economic, and moral issues when discussing food security. We will likely return to points made in this paper.
Required Readings
The Pessimistic Viewpoint
- Paul R. Ehrlich and Anne H. Ehrlich. 2013. “Can a Collapse of Global Civilization Be Avoided?” Proceedings of the Royal Society B: Biological Sciences. 280, no. 1754: 20122845.
The Optimistic Viewpoint
- Bjørn Lomborg. 2001. “Food and Hunger.” Chapter 5 in The Skeptical Environmentalist: Measuring the Real State of the World. Cambridge University Press, 2001. ISBN: 9780521010689.
A More Balanced Viewpoint
- H. Charles J. Godfray, John R. Beddington, et al. 2010. “Food Security: The Challenge of Feeding 9 Billion People.” Science, 327, no. 5967: 812–818.
A “Social Focused” Viewpoint
- Amartya Sen. 1982. “The Food Problem: Theory and Policy.” Third World Quarterly, 4, no. 3: 447–459.
Optional Reading
Food Demand and Production
- Thomas Malthus, 2008. An Essay on the Principle of Population. Oxford University Press, Geoffrey Gilbert (Editor), ISBN: 9780199540457.
Optional Videos
Another Pessimist
- Jonathan Foley. 2011. “The Other Inconvenient Truth: How Agriculture is Changing the Face of Our Planet (YouTube).” TEDx talk, published on YouTube September 2, 2011.
Another Optimist
- Hans Rosling. 2013. “Don’t Panic: The Truth About Population.” Produced by Wingspan Productions and The Open University for the BBC.
Discussion Points
- Can technological advances keep pace with increasing human demand for natural resources? Is a reduction in our demand for food, energy, and goods essential or is it an unnecessary drag on our economy?
- Why do you think different researchers can draw such differing conclusions from some of the same data sources (e.g. UN data)? Which perspective do you find most convincing?
- Do you think the Godfray et al (2010) discussion adequately defends the idea that crop production can be increased substantially without causing adverse environmental consequences? Please elaborate on the reasons for your opinion.
- Do you still find Sen’s comments from 1982 relevant to a discussion of food security? How would you weight or compare the role of science vs. the role of politics in addressing the “food problem”?
Instructor Interview
Most of the students in Professor Dennis McLaughlin’s course 1.74 Land, Water, Food, and Climate come to it with established opinions on some very controversial topics: whether GMOs are safe, whether climate change is real (and really human-induced), whether organic agriculture is preferable to conventional agriculture, and whether it’s better for land to be worked by individual farmers or by larger corporations. Dealing with topics like these in an introductory graduate-level class can be challenging. You have to train students to read the scientific literature so that they can evaluate the facts on both sides of an issue. But you also have to strike a balance between those concrete facts and the intangible social values that enter into debates on sensitive topics.
In the episode of the Chalk Radio podcast embedded below, Professor McLaughlin describes his approach to those two challenges in teaching 1.74; he also explains how a diversity of backgrounds among the students in the class enriches class discussion, and he describes what he sees as the teacher’s role: to adjust and when necessary reframe the terms of discussion, while still allowing students the freedom to explore the ramifications of their ideas.
In the written insights linked below, Professor McLaughlin describes various aspects of how he teaches 1.74 Land, Water, Food, and Climate.
Course Readings as Catalysts for Discussion
Curriculum Information
Prerequisites
None
Requirements Satisfied
1.74 can be applied toward a Master of Science or Doctor of Philosophy degree in Civil and Environmental Engineering, but is not required.
Offered
Every spring semester
Assessment
Grade Breakdown
The students’ grades were based on the following activities:
- 70% Presentations and general participation
- 30% Research paper
Student Information
Enrollment
12 students
Breakdown by Year
A mix of graduate and undergraduate students
Breakdown by Major
Largely Civil & Environmental Engineering students; some from other fields including Architecture, Urban Studies and Planning, and Management
Typical Student Background
A mix of American and international students, with varying levels of prior exposure to farms and farming
How Student Time Was Spent
During an average week, students were expected to spend 6 hours on the course, roughly divided as follows:
In Class
Met 1 time per week for 3 hours per session; 12 sessions total; mandatory attendance
Out of Class
Students spent time outside of class reading assigned papers and working on their presentations and research projects.
In this section, Professor McLaughlin shares insights about facilitating a reading seminar focused on discussion.
1.74 Land, Water, Food and Climate is a reading seminar, which means the students read scientific papers that are current and suitable for their level of preparation. They read three or four papers every week, and we spend the class session discussing the papers. Occasionally, I give a little background on things that are in the papers with a short lecture—maybe 15 to 20 minutes—but not every time. This puts a lot of responsibility on the students to read the papers.
It’s a format that’s often used in science and engineering classes at the graduate level, but it’s not so common at the undergraduate level. I find the students really like the discussion.
The difficulty with this kind of teaching is making sure you give the students enough of an opportunity to engage in the conversation, even when they make contributions that are inaccurate or go beyond the scope of the paper. It’s hard to resist interrupting, and eventually, you do have to straighten them out—you can’t let the students leave at the end of the class with a completely wrong impression about something. But having the freedom to say what’s on their minds is part of what students like about the course.
Facilitating a reading seminar is definitely an art, and you learn it over multiple iterations of the course. You can’t just start the very first year and be perfect at it—you have to learn the teaching style as well as the content. You have to observe what works and what doesn’t.
What doesn’t work, for example, is to ask an open-ended question and have nobody answer. It works better when you ask a specific question and the student can say, “Gee, I really didn’t think about that.” Because if the student says, “I didn’t think about that,” and then there’s a couple of seconds of awkward silence, another student will come into the discussion. And that’s what you want—to start getting that interaction.
Overview
How can we achieve a sustainable balance between food supply and demand in a diverse and changing world?
What do we need to consider in our search for practical ways to reconcile food supply and demand?
The readings and supplementary information provided in Section 2 make some points that we need to consider when looking for ways to reconcile food demand and supply:
- Demand reductions alone will probably not be sufficient to meet the nutritional needs of the 21st century global population. Food production will also need to increase. The amount of increase required will depend on uncertain changes in per capita demand and population.
- At the same time, agricultural technology and management practices will need to be more sustainable than they have been, so that increases in production can be maintained over the long term.
- The agricultural system will need to be more resilient so it is able to respond to climate change, disease, and uncertain trade policies.
- Food will need to be more accessible to populations that are vulnerable to malnutrition.
In this section we examine several topics that provide context for our subsequent discussions of strategies for meeting projected demand for food.
Class 5 considers the critical role of crop yield. The strategy of sustainable intensification, which we have already encountered, relies on finding environmentally acceptable ways to increase yield. This can be done i) by increasing potential yield, which is the yield that can be obtained for a particular crop under ideal conditions, or ii) by reducing sources of stress that cause actual yield to fall below potential yield. The readings for this class provide good introductory discussions of the factors that both limit yield and contribute to high yield variability. Class 5 provides background for our later discussion, in Class 11, of yield increases in the Green Revolution and beyond.
Class 6 considers the effects of agriculture on the environment, with an emphasis on important ecosystem and nutrient cycle changes that could impact future production. The readings deal with depletion of water supplies, impacts of nutrient and pesticide application, soil degradation, and ecosystem changes. We consider these topics so we can properly evaluate the environmental impacts of increasing crop inputs such as irrigation water, fertilizer, and pesticides.
Class 7 takes a closer look at smallholder farmers, who produce and consume large fractions of the global food supply. This discussion is useful for assessing the challenge of improving food security for everyone, including the rural poor in developing regions. Class 8 considers climate changes of particular relevance to agriculture, with an emphasis on impacts that could affect crop yield, smallholder farmers, and pest control.
Class 9 discusses two new technologies, genetically engineered crops and precision agriculture, that could have significant impacts on crop production. Class 10 examines the roles of trade and optimization as strategies for making better use of limited and unevenly distributed resources. Trade has the effect of transferring land and water over time and space, reducing regional or seasonal imbalances between supply and demand.
Our readings on these topics, together with the Supporting Information, provide important background needed to evaluate the management options considered in Section 4.
Section 3 Class Topics
Class 5: Crop Yield
Class 6: Environmental Impacts of Agriculture: Protecting Natural Resources
Class 7: Smallholder Farming: Focus on Africa
Class 8: Climate Change and Agriculture
Class 9: New Technologies and Practices: Genetic Engineering, Precision Agriculture
Class 10: Trade and Optimal Resource Allocation
Section 3 Supporting information (SI)
S9. Soil Properties, Soil Suitability Measures, and Changes in Soil Quality
S10. Global and Regional Farm Characteristics
S11. World Greenhouse Gas Emissions: 2016
S12. Adoption of Genetically Engineered Crops in the USA
Farm Category, Area, and Production Distributions
Recent refereed publications have started to provide a more accurate picture of the global farm system. The information cited here is peer reviewed and should be reproducible from cited data sources. This is in contrast to statistics found in many reports and articles issued by international agencies, which sometimes do not identify their sources.
A good place to start in our survey of global farm characteristics is Figure S10.1, which shows two gridded maps that identify a) the percentage of pixel area devoted to crop production and b) a categorical estimate of crop field size. Both are derived from the same global land use data. Categorical field size estimates can be inferred from satellite data and have been shown to correlate with farm size. Details of the mapping procedure are described in Fritz et al. (2015).
The darker shaded 1 km pixels on the cropland percentage map indicate more densely cultivated regions. The four colors on the field size map show the characteristic crop field size (from “very small” to “large”) identified with a given pixel. The dark green areas in a) are the major food producing regions. The dark blue (large field size) areas in b) generally coincide with the major food exporting areas (North America, Australia, and parts of South America). The red (very small field size) areas in b) indicate regions where smallholder farms dominate (China, India, and sub-Saharan Africa).
Figure S10.1: Gridded 1km pixel maps of a) percentage of land in each pixel devoted to crop production
b) characteristic field size category for each pixel. (Fritz et al., 2015).
© John Wiley & Sons, Inc. All rights reserved. This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Figure S10.2 shows information from global land census surveys conducted in 1990 and 2000 in 111 countries and compiled by Lowder et al. (2016). Panel a) shows the fraction of farms in each of several size categories, based on data from 460 million farms in 111 countries. Since the survey is reasonably inclusive, the paper assumes that the fractions in the chart apply to the global total, which is estimated to be at least 570 million farms. The chart indicates that 94% of global farms are smaller than 5 ha. Based on this and a conservative assumption of up to 4 people per farm household, we can estimate that there are up to 2 billion people on farms smaller than 5 ha (perhaps more). This is roughly consistent with uncited estimates given in UN documents.
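The 2-billion estimate above is simple arithmetic, sketched here for transparency. The farm count and size fraction are the Lowder et al. figures quoted in the text; the 4-person household size is the text's own stated upper-bound assumption, not a surveyed value.

```python
# Back-of-envelope check of the smallholder population estimate.
# Inputs are the figures quoted in the text; the household size is
# the text's assumed upper bound, not survey data.

total_farms = 570e6          # estimated global number of farms (at least)
frac_under_5ha = 0.94        # fraction of farms smaller than 5 ha
people_per_household = 4     # assumed upper bound per farm household

farms_under_5ha = total_farms * frac_under_5ha
people_on_small_farms = farms_under_5ha * people_per_household

print(f"Farms < 5 ha: {farms_under_5ha / 1e6:.0f} million")
print(f"People on farms < 5 ha: ~{people_on_small_farms / 1e9:.1f} billion")
```

With these inputs the estimate comes out at roughly 2.1 billion people, consistent with the "up to 2 billion (perhaps more)" statement.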
Panel a) is a snapshot of the global farm size distribution from around 1990–2000. Farm size distributions and averages change over time. The trend plot in Panel b) shows that average farm sizes in high income countries (left scale) continually increased between 1960 and 2000, while farm sizes in other countries generally decreased. African countries are indicated by the dotted line (right scale). More recent data suggest that these regional trends have continued in the same direction.
Figure S10.2 Global farm size category and area distributions and trends (Lowder et al., 2016).
Courtesy Elsevier, Inc., http://www.sciencedirect.com. Used with permission.
Although farms over 5 ha only account for 6% of all global farms the land area they each contribute is large. To address this, the Lowder et al. paper also estimates the total cropland area in each size category. Cropland area provides a basis for comparing, for example, the relative contribution to production from smallholders, medium size farms, and large farms. The global cropland area distribution in Panel c) clearly indicates that small farms are more numerous (dark gray) but that larger farms contribute more total area (light gray). Farms smaller than 2 ha and 5 ha account, respectively, for 12% and 18% of global cropland area while farms larger than these thresholds account, respectively, for 88% and 82% of the global total area. We can expect global crop production contributions of small vs. large farms to roughly correlate with these global cropland area figures (see next section).
Figure S10.3 shows a similar plot with results for sub-Saharan Africa. Here larger farms contribute a smaller fraction of the regional total area (also, note the change in the horizontal scale). Farms smaller than 2 ha and 5 ha account, respectively, for 38% and 66% of sub-Saharan Africa cropland area while farms larger than these thresholds (but smaller than 50 ha) account, respectively, for the remaining 42% and 34% of area. This suggests that smallholder production is a major fraction of total domestic food production in sub-Saharan Africa.
Figure S10.3 Sub-Saharan farm size category and area distributions compared (Lowder et al., 2016).
Courtesy Elsevier, Inc., http://www.sciencedirect.com. Used with permission.
A recent study by Ricciardi et al. (2018) provides smallholder cropland area estimates that are somewhat higher than those of Lowder et al. and are based on a somewhat smaller number of country surveys. The Ricciardi et al. (2018) paper estimates that smallholder farms less than 2 ha occupy 24% of global cropland area (rather than 18%). It also indicates that these farms generate 28–31% of global crop production and 30–34% of global calorie production. These results confirm Lowder's conclusion that smallholder farms contribute significantly to the global food supply. The Ricciardi et al. (2018) paper does not provide results for particular regions, such as sub-Saharan Africa, but it does provide size category distributions for particular crops.
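One implication of the Ricciardi et al. (2018) shares quoted above is worth making explicit: if farms under 2 ha hold 24% of global cropland but generate 28–31% of production, their output per unit area is somewhat above the global average. The ratio below is my derived illustration, not a figure reported in the paper.

```python
# Implied output per hectare of smallholder farms relative to the
# global average, using the Ricciardi et al. (2018) shares quoted above.

area_share = 0.24                 # share of global cropland, farms < 2 ha
production_share = (0.28, 0.31)   # share of global crop production (range)

# production share / area share > 1 means above-average output per ha
relative_yield = [p / area_share for p in production_share]
print(f"Output per ha relative to global average: "
      f"{relative_yield[0]:.2f}x to {relative_yield[1]:.2f}x")
```

The ratio works out to roughly 1.2–1.3 times the global average, a rough indicator only, since crop mix and multi-cropping intensity differ across farm sizes.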
References:
Steffen Fritz, Linda See, et al. 2015. “Mapping Global Cropland and Field Size.” Global Change Biology, 21, no. 5: 1980–1992.
IFAD International Fund for Agricultural Development. 2010. “Rural Poverty Report 2011 - New Realities, New Challenges: New Opportunities for Tomorrow’s Generation.” Annexes 1 and 2. IFAD, Rome
Sarah K. Lowder, Jakob Skoet, and Terri Raney. 2016. “The Number, Size, and Distribution of Farms, Smallholder Farms, and Family Farms Worldwide.” World Development, 87, 16–29.
Vincent Ricciardi, Navin Ramankutty, et al. 2018. “How Much of the World’s Food Do Smallholders Produce?” Global Food Security, 17, 64–72.
Our analysis so far suggests that our water and land resources may not be sufficient to grow the additional food required to provide global food security throughout this century, at least with current technology and management practices. It seems quite possible that climate change may make things even more difficult. Also, there is no guarantee that it will be possible to meet food demands with sustainable intensification that raises yield on existing cropland through increased nutrient and pesticide inputs and expanded irrigation infrastructure. So it is natural to ask whether new technologies and the improved management practices they enable could be the answer.
This class looks at two promising and much discussed new technologies—agricultural biotechnology and precision agriculture. One of the primary goals of biotechnology in the agricultural sector is to improve plant capabilities for dealing with environmental stresses that limit yield. Of particular interest are stresses from pests and disease, and from extreme events such as heat waves, droughts, and floods. Other important goals include increasing potential (unstressed) crop yields, reducing pesticide use, and improving crop nutritional quality. It is worth mentioning that biotechnology also has a role in improving the quantity and quality of livestock products. This aspect is important but not discussed here.
The paper by Ronald (2011) provides a useful introduction to agricultural biotechnology. It focuses on examples of insect resistant, herbicide resistant, and viral resistant crops that are all being used in the field and it provides shorter discussions of possible future applications that go beyond pest control. The examples are illustrated and updated in Ronald’s TED video. The video by Jill Farrant provides an interesting example of ongoing research on drought resistance, which has yet to be put into practice. The optional paper by Ricroch & Hénard-Damave (2016) gives a sense of the range of genetically engineered agricultural products in development at the time of publication. Hefferon and Herring (2017) discuss some new approaches in genomic technology.
The stated goals of agricultural genetic engineering are generally admirable but the means used to achieve these goals are controversial. Although some have raised concerns about human health impacts the most credible concerns are related to ecological aspects. Some examples are:
- New varieties of pests that are resistant to genetic innovation can evolve through natural selection. This is a problem that is also associated with the use of traditional chemical pesticides.
- Genetically engineered crops could be invasive, with related loss of biodiversity.
- Genetically engineered crops could have adverse effects on non-target organisms and ecosystems, including soil microbiological communities.
- New viruses with unknown properties could develop in transgenic viral-resistant plants.
The paper by Wolfenbarger and Phifer (2000) reviews some of these concerns. Gilbert (2013) provides a journalistic look at resistance as well as some other ecological and social issues that have been raised by opponents of genetic engineering.
Concerns about genetic engineering were largely hypothetical and speculative when the Wolfenbarger and Phifer (2000) paper was published. The situation has not changed much since then. The community that voices these concerns still has strong but thinly documented reservations about possible adverse environmental impacts but rarely mentions the possibility that modern biotechnology could provide additional food for millions. On the other hand, the community advocating genetic engineering barely mentions environmental concerns. There are still surprisingly few data-driven papers that address both sides of the issue. The controversy has become quite polarized, partly because of strict European limits on the development and use of genetic engineering in agriculture. The optional reading by Paarlberg (2010) provides an interesting policy perspective, considering differences between European and African food security needs. The reading by the US National Academy of Sciences (2016) summarizes an effort to build a consensus position.
Overall, biotechnology is a promising development for food security that has already had an impact on agricultural production (see S12). Much of the practical success to date has been in genetically engineered pest control techniques that improve on traditional pesticides but share the need to continually deal with acquired pest resistance. There is still much uncertainty about the longer-term effectiveness and environmental impacts of current genetic engineering technology.
Our second technology, precision agriculture, seems to have little downside, other than affordability. Precision agriculture technologies are designed to make crop production more efficient, with respect to the use of water, nutrients, pesticides, labor, capital, and other inputs. They do this by combining new high-resolution sensors, information technology, and cultivation equipment in an integrated package. Widespread adoption of precision agriculture methods could have positive environmental impacts if they reduce water use and undesirable off-farm losses of fertilizer and pesticides. The article by Gebbers and Adamchuk (2010) provides a concise overview of relevant technology while the paper by Bogue (2017) surveys some precision agriculture sensors and equipment in current use or in development. The Millennial Farmer video shows an example of a popular precision agriculture product on a large US farm.
© American Association for the Advancement of Science. All rights reserved.
This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use.
There are few comprehensive studies of the effects of precision agriculture on crop yield and farm revenue. Some of the equipment required is expensive so its advantages must be weighed in light of the capital investment required, especially for applications to small farms in developing countries. However, it seems likely that demand for precision agriculture technology will increase and prices will fall if this technology can be shown to significantly reduce crop inputs while also improving yield. To date, precision agriculture innovations have tended to be driven by corporate research and development but academic research can be expected to have an increasingly important role in sensor development and in applications of robotic, “big data,” and artificial intelligence technologies to the agricultural sector. The challenge will be to ensure that these developments will find their way to smallholders and poorer farmers.
Required Readings
Genetic Engineering and Food Security
- Pamela Ronald. 2011. “Plant Genetics, Sustainable Agriculture and Global Food Security.” Genetics, 188, no. 1: 11–20.
Ecological Risks of Genetically Engineered Crops
- L.L. Wolfenbarger and P. R. Phifer. 2000. “The Ecological Risks and Benefits of Genetically Engineered Plants.” Science, 290, no. 5499: 2088–2093.
- Natasha Gilbert. 2013. “Case Studies: A Hard Look at GM Crops.” Nature, 497, 1 May.
Summary Statement on Genetic Engineering
- US National Academy of Sciences. 2016. Genetically Engineered Crops: Experiences And Prospects, Executive Summary, The National Academies Press, Washington, DC.
Precision Agriculture
- Robin Gebbers and Viacheslav I. Adamchuk. 2010. “Precision Agriculture and Food Security.” Science, 327, no. 5967: 828–831.
- Robert Bogue. 2017. “Sensors Key to Advances in Precision Agriculture.” Sensor Review. Emerald Publishing Limited.
Optional Readings
Genetic Engineering
- Robert Paarlberg. 2010. “GMO Foods and Crops: Africa’s Choice.” New Biotechnology, 27, no. 5: 609–613.
- Kathleen L. Hefferon and Ronald J. Herring. 2017. “The End of the GMO? Genome Editing, Gene Drives and New Frontiers of Plant Technology.” Review of Agrarian Studies, 7, no. 1: 1–32.
- Agnès E. Ricroch and Marie-Cécile Hénard-Damave. 2016. “Next Biotech Plants: New Traits, Crops, Developers and Technologies for Addressing Global Challenges.” Critical Reviews in Biotechnology, 36, no. 4: 675–690.
Videos
The Case for Genetically Engineered Food
- Pamela Ronald. “The Case for Genetically Engineering Our Food.” TED talk March 2015, published on YouTube.
Genetic Engineering for Drought Resistance
- Jill Farrant. “How We Can Make Crops Survive without Water.” TEDGlobal talk December 2015, published on YouTube.
Precision Agriculture Demonstration
- Millennial Farmer. “Quick Look at the Climate Corporation’s FieldView.” (a precision agriculture product), May 11, 2017, published on YouTube.
Discussion Points
- How would you compare the desirability and feasibility of meeting projected food demand by 1) reducing meat consumption and closing developing country yield gaps by expanding fertilizer use and irrigation vs. 2) replacing traditional crops with more robust higher yield genetically engineered crops?
- Do we really need genetically engineered crops?
- What kinds of precision agriculture products do you think would attract a market among small farmers in the developing world?
The three crop production systems described in the Section 4 Overview each include farms found in many different geographic regions. This makes it difficult to compile statistics that characterize each system as a whole. However, it is possible to identify a representative region or country in each production system so that the three systems can be conveniently compared. This is feasible because the differences across the systems are so significant.
Table S13.1 below lists a number of different demographic, agricultural, and socioeconomic indicators for the three production systems, based on the following representative countries or regions:
Commercial export-oriented farms: USA
Small high-input farms: China
Small low-input farms: sub-Saharan Africa
The statistics in the table apply ca. 2010 and are taken primarily from FAOSTAT, periodic USDA statistical reports (USDA ERS and NASS), IFAD (2010), Wu et al. (2018), and Lowder et al. (2016).
Table S13.1

Indicator                                       USA                China      Sub-Saharan Africa

Population
Total population (million)                      300                1350       777
UN projected 2050 population (million)          380                1410       2100
Agricultural population (million)               5.1 (1.7%)         842 (62%)  432 (56%)
Fertility (births per woman)                    1.9                1.7        5.2

Land
FAO arable land (million ha)*                   156                106        165
Fraction of cropland on farms < 5 ha            —                  80%        66%
Fraction of cropland on farms > 100 ha          91%                —          —

Inputs
Cropland irrigated                              14%                36%        2%
N fertilizer use, 10^6 tonnes                   12 (0.1)           31 (0.3)   0.5 (0.003)
(tonnes per arable ha in parentheses)
N fertilizer production (10^6 tonnes)           9                  36         1

Cereal production
Cereal production (10^6 tonnes)                 434                554        131
Cereal exports (10^6 tonnes)                    64 (15%)           21 (4%)    2.7
Cereal imports (10^6 tonnes)                    10 (2%)            1 (0.2%)   30 (23%)

Socioeconomic
GDP per capita                                  $51,000            $1,894     $632
Typical annual farm household income            $60,000–$200,000   —          —
Rural residents living on less than $2/day      —                  35%        87%
Infant mortality (per 1000 births)              6                  19         89
Life expectancy (years)                         79                 75         51
Fraction undernourished                         3%                 9%         30%

* Temporary food and fodder crops
A review of these statistics suggests:
- The US and China have much lower birth rates than sub-Saharan Africa.
- US farms are much larger than in the other two regions and are operated by a much smaller fraction of the total population.
- The US and China have much more irrigated land and produce and consume much more fertilizer than sub-Saharan Africa.
- The US exports a significantly larger fraction of its cereal production than the other two regions.
- Sub-Saharan Africa is much more dependent on cereal imports than the other two regions.
- Sub-Saharan Africa farmers are much poorer and more subject to health problems than the other two regions.
- China has been able to achieve much better income and health results than sub-Saharan Africa, even though a larger fraction of its cropland is on small farms. Does this suggest that it could be possible for sub-Saharan agriculture to prosper with primarily small farms? Or should it emulate the US large farm model rather than China’s?
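The cross-regional contrasts above can be made concrete with a couple of derived per-capita figures. The sketch below uses only the population, cereal production, and cereal import numbers from Table S13.1; the per-capita quantities are illustrative derivations, not values from the cited sources.

```python
# Derived per-capita cereal indicators from Table S13.1 (ca. 2010).
# Inputs per region: total population (million), cereal production
# (10^6 tonnes), cereal imports (10^6 tonnes), all from the table.
regions = {
    "USA":                (300, 434, 10),
    "China":              (1350, 554, 1),
    "Sub-Saharan Africa": (777, 131, 30),
}

per_capita = {}
for name, (pop_m, prod_mt, imports_mt) in regions.items():
    # Both numerator and denominator are in millions, so the ratio is
    # tonnes per person per year.
    prod_pc = prod_mt / pop_m
    imp_pc = imports_mt / pop_m
    per_capita[name] = (prod_pc, imp_pc)
    print(f"{name}: {prod_pc:.2f} t produced, {imp_pc:.3f} t imported per person")
```

The derived numbers sharpen the bullets above: US cereal production per person is several times China's and nearly an order of magnitude above sub-Saharan Africa's, while sub-Saharan Africa's per-capita imports are far higher than China's.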
References
IFAD International Fund for Agricultural Development. 2010. “Rural Poverty Report 2011 - New Realities, New Challenges: New Opportunities for Tomorrow’s Generation.” Annexes 1 and 2. IFAD, Rome
Sarah K. Lowder, Jakob Skoet, Terri Raney. 2016. “The Number, Size, and Distribution of Farms, Smallholder Farms, and Family Farms Worldwide.” World Development, 87, 16–29.
USDA ERS and NASS. 2012. “Agricultural Resource Management Survey.” (Accessed 22 July, 2020).
Yiyun Wu, Xican Xi, et al. 2018. “Policy Distortions, Farm Size, and the Overuse of Agricultural Chemicals in China.” Proceedings of the National Academy of Sciences, 115, no. 27: 7010–7015.
In Class 11 we examine more closely the high input agricultural systems that now prevail in somewhat different forms in developed countries and a large part of the developing world. These systems, both the large and small farm versions, rely on modern high yield cultivars, extensive irrigation, and synthetic fertilizers. Our discussion builds on readings covered earlier, especially in Section 3, as well as new readings that focus on the feasibility of further increasing production with the high input model.
© Springer Nature. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use.
In the mid-twentieth century the high input modern cultivar approach widely used in European and North American agriculture was extended, with considerable success, to Asia and parts of Latin America. This so-called “Green Revolution” was driven by selective breeding programs that developed hardier, higher yielding varieties of wheat and rice (see attached sketch from Khush (2001)). As the new varieties were adopted they increased cereal production and provided substantial improvements in food security. The short article by Hazell (2002) summarizes the history and accomplishments of this transformation.
The Green Revolution introduced environmental problems, as well as greater food security, to the countries where its methods were adopted. Many of these problems were related to the increased use of fertilizers, irrigation, and pesticides that were needed to fully realize the higher yield potentials of the new varieties. This increase in inputs, which was often not carefully controlled, led to the adverse environmental impacts now widely associated with modern agriculture and discussed in Class 6. The paper by Pimentel and Pimentel (1990) provides a brief summary. More details on relevant economic issues, environmental impacts, and farm consolidation are provided in a longer optional reading by Hazell (2010). The optional reading by Khush (2001) provides background on the genetic methods used to breed Green Revolution cultivars.
Green Revolution technology transformed much of smallholder agriculture in India and China from the small farm/low input model to the more productive small farm/high input model introduced in the Section 4 Overview. The transformation was most successful in areas with sufficient water to support irrigation systems and with ready access to fertilizers and locally appropriate seeds. Although efforts were made to introduce the Green Revolution to Africa, these were less successful, leaving a sizable and rapidly growing population with the lower yields, less reliable production, and poverty generally associated with low input smallholder farms. This aspect of the Green Revolution is briefly discussed in Carr (2001) in Class 7.
Does the success of the twentieth century Green Revolution imply that the best way to achieve global food security is to extend intensification more widely, perhaps by increasing production in the major exporting nations, but certainly by increasing it in Africa and the parts of Asia and Latin America that have been left behind? That will require resolution of the issues raised by Carr, including development of irrigation infrastructure and more ready access to inputs and markets.
The possibility of further intensifying the Green Revolution approach is addressed in the widely cited paper by Cassman (1999). He considers the possibility of raising cereal production by increasing potential (unstressed) yield (Class 5), controlling and improving soil quality (Class 6), and using precision agriculture to improve the efficiency of nitrogen application (Class 9). In each case, his conclusion is that small rather than revolutionary improvements are most likely. Cassman mentions that new genetic engineering methods could conceivably have a major impact if they can increase yield potential significantly beyond what has already been achieved with traditional selective breeding. However, the primary contribution of these methods so far has been to bring actual yields closer to existing potential yields by reducing stress, primarily from losses to pests.
Soil quality improvements could also be beneficial but require more than just additional nutrient application, as illustrated in the paper’s examples of recent yield declines in irrigated rice systems. Post-Green Revolution yield declines have also been observed in other settings (see Class 5). Although precision agriculture holds promise for improving the efficiency of nutrient and irrigation water application, it focuses on bringing actual yields up to potential levels by reducing stress, rather than on increasing potential yield. The overall implication of Cassman’s analysis is that site-specific measures to close yield gaps could make substantive differences where actual yields are much lower than yield potential (e.g., in areas where low input smallholder agriculture still dominates). But there is relatively little reason to believe that there will be another round of dramatic increases in cereal yield potential by mid-century. This raises serious questions about the feasibility of feeding an additional 2 or 3 billion people with intensified Green Revolution methods.
In food security, as in climate change, it seems that the most realistic options are those that combine a number of measures that may have relatively small effect in themselves but may have significant impact when taken together. We return to this idea in Section 5, after a look at the Agroecology alternative to the Green Revolution.
Required Readings
Background on the Green Revolution
- Peter B. R. Hazell. 2002. “Green Revolution: Curse or Blessing?” (PDF) No. REP-9450. Washington, DC (USA), IFPRI. 3 p.
- D. Pimentel and M. Pimentel. 1990. “Comment: Adverse Environmental Consequences of the Green Revolution.” Population and Development Review, 16, 329–332.
Possibilities of Further Extending the Green Revolution’s Intensification Approach
- Kenneth G. Cassman. 1999. “Ecological Intensification of Cereal Production Systems: Yield Potential, Soil Quality, and Precision Agriculture.” Proceedings of the National Academy of Sciences, 96, no. 11: 5952–5959.
Optional Reading
Green Revolution impacts
- Peter B.R. Hazell. 2010. “Asia’s Green Revolution: Past Achievements and Future Challenges.” Chapter 1.3 in Rice in the Global Economy. Strategic Research and Policy Issues for Food Security. S. Pandey, D. Byerlee, et al. (Eds). IRRI. Manila.
Genetic Background for the Green Revolution
- Gurdev S. Khush. 2001. “Green Revolution: the Way Forward.” Nature Reviews Genetics, 2, no. 10: 815–822.
Discussion Points
- Based on what you have read in class, is it likely that large-scale high input agriculture will spread throughout the areas where it is feasible and feed the rest of the world through exports? In this model, which is believed by some to be inevitable, small farms, either high or low input, will essentially disappear, replaced by a more industrialized global food system.
- Do you agree with Hazell and others who believe that the Green Revolution’s environmental problems were due to farmer limitations and policy failures rather than any intrinsic deficiency in Green Revolution methods?
- Although the Green Revolution may have its limitations, it seems to have been successful at increasing national production. Should we be looking for an alternative when considering the future or learn from the past and improve the basic model so production can be increased further and undesirable side effects minimized?
Course Description
This course is an overview of engineering approaches to protecting water quality with an emphasis on fundamental principles. Theory and conceptual design of systems for treating municipal wastewater and drinking water are discussed, as well as reactor theory, process kinetics, and models. Physical, chemical, and biological processes are presented, including sedimentation, filtration, biological treatment, disinfection, and sludge processing. Finally, there is discussion of engineered and natural processes for wastewater treatment.
Course Info
Learning Resource Types
- Problem Sets with Solutions
- Exams with Solutions
- Lecture Notes
- Projects with Examples
Course Meeting Times
Lectures: Two sessions / week, 1.5 hours / session
Instructor
Dr. Peter Shanahan
Content
The course examines the needs for water quality and how to achieve it by drinking water treatment, wastewater treatment, and other water-quality control strategies. The emphasis of the course is on principles and theory.
Grading
Regular attendance and class participation will be considered in assigning final grades.
Collaboration on homework is permitted - please list your collaborators. Homework solutions will be posted one week after graded homework is returned - you can submit corrected homework for re-grading prior to posting of solutions.
Reading
Readings are intended as a supplement to lectures and are probably most beneficial if completed before each lecture.
Reynolds, T. D., and P. A. Richards. Unit Operations and Processes in Environmental Engineering. 2nd ed. Boston, MA: PWS Publishing Company, 1996. ISBN: 0534948847.
References
Mara, D. Domestic Wastewater Treatment in Developing Countries. London, UK: Earthscan, 2003. ISBN: 1844070190.
Far more general than the title implies, this reference provides very clear descriptions of the characteristics of wastewater and the fundamentals of treatment.
Viessman, W., Jr., and M. J. Hammer. Water Supply and Pollution Control. 7th ed. Pearson Education, Inc., Upper Saddle River, NJ: Pearson Prentice Hall, 2005. ISBN: 0131409700.
Tchobanoglous, G., F. L. Burton, and H. D. Stensel. Wastewater Engineering: Treatment and Reuse. 4th ed. Metcalf and Eddy Inc., New York, NY: McGraw-Hill, 2003. ISBN: 0070418780.
MWH Staff. Water Treatment: Principles and Design. 2nd ed. New York, NY: Wiley, 2005. ISBN: 0471110183.
Course Meeting Times
Lectures: 3 sessions / week, 1 hour / session
Recitations: 1 session / week, 1 hour / session
Texts
Required
Incropera, Frank P., and David P. DeWitt. Fundamentals of Heat and Mass Transfer. 5th ed. New York, NY: John Wiley & Sons, 2001. ISBN: 9780471386506. Including software tools, users’ guides and associated CD.
Additional References
Bird, R. Byron, Warren E. Stewart, and Edwin N. Lightfoot. Transport Phenomena. New York, NY: John Wiley & Sons, 1960. Also 2nd ed. 2001. ISBN: 9780471410775.
Cussler, E. L. Diffusion: Mass Transfer in Fluid Systems. 2nd ed. Cambridge, UK: Cambridge University Press, 1997. ISBN: 9780521564779.
Kreith, Frank, and Mark S. Bohn. Principles of Heat Transfer. 6th ed. Pacific Grove, CA: Brooks/Cole, 2000. ISBN: 9780534375966.
Middleman, S. An Introduction to Mass and Heat Transfer. New York, NY: John Wiley & Sons, 1998. ISBN: 9780471255369.
Holman, J. P. Heat Transfer. 8th ed. New York, NY: McGraw-Hill, 1996. ISBN: 9780078447853.
Mills, Anthony F. Heat and Mass Transfer. Chicago, IL: Irwin, 1994. ISBN: 9780256114430.
Modest, M. F. Radiative Heat Transfer. Burlington, MA: Academic Press, 2003. ISBN: 9780125031639.
Welty, J. R., C. E. Wicks, R. E. Wilson, and G. Rorrer. Fundamentals of Momentum, Heat, and Mass Transfer. 4th ed. New York, NY: John Wiley & Sons, 2000. ISBN: 9780471381495.
Assignments
Reading assignments are listed on the syllabus. Additional reading assignments may occasionally be announced in class.
Homework assignments will be given out on Wednesdays and will be due at the beginning of class the next Wednesday, unless otherwise specified. No late homework will be accepted.
Recitations
Recitation sections will be devoted to working example problems. Problems will be handed out in class the Wednesday before recitation. You should work the problems to the best of your ability before coming to class on Tuesday. Students will be called on in recitation to work problems. There will be a ten minute closed book quiz during certain recitations. The problems will be drawn from the material contained in the lectures, text readings, or the homework. Example problems scheduled to be covered that day are likely to be emphasized. We will strive to emphasize concepts rather than mathematical details, but the problems may be quantitative. The lowest quiz grade will be dropped. There will be no make-up quizzes.
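The drop-the-lowest quiz rule described above is simple to implement; a small sketch (the quiz scores here are hypothetical, on a 10-point scale):

```python
def quiz_average(scores):
    """Average quiz score after dropping the single lowest grade,
    per the recitation quiz policy: the lowest quiz grade is dropped."""
    if not scores:
        return 0.0
    if len(scores) == 1:
        return float(scores[0])
    kept = sorted(scores)[1:]  # discard the lowest score
    return sum(kept) / len(kept)

# Hypothetical quiz scores; the 4 is dropped:
print(quiz_average([7, 9, 4, 10, 8]))  # prints 8.5
```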
Labs and Exams
There is a laboratory component to the course consisting of two experiments: a conduction and a heat exchanger experiment. Additional details on the labs can be found on the Heat Transfer Project website. There are three one hour exams and a final exam.
Grading
10.302 Policy on Collaboration
The fundamental principle of academic integrity is that you must fairly represent the authorship of the intellectual content of the work you submit for credit. In the context of 10.302, this means that if you consult with others (such as fellow students, TAs, faculty) in the process of completing homework, you must acknowledge their contribution in any way that reflects their true ownership of the ideas and methods you borrowed.
Discussion among students to understand the homework problems or to prepare for laboratories or quizzes is encouraged. Copies of previous years’ problems and quizzes (“bibles”) will be made available and are considered useful in the educational process. Collaboration on homework is allowed as long as all references (both literature and people) used are named clearly at the end of the assignment. Word-by-word copies of someone else’s solution or parts of a solution handed in for credit will be considered cheating unless there is a reference to the source for any part of the work which was copied verbatim. Failure to cite another student’s contribution to your homework solution will be considered cheating. Official Institute policy regarding academic honesty can be found in the MIT Bulletin Course and Degrees Issue under “Academic Procedures and Institute Regulations.”
Study groups are considered an educationally beneficial activity. However, at the end of each problem on which you collaborated with another student you must cite the students and the interaction. The purpose of this is to acknowledge their contribution to your work. Some examples follow:
-
You discuss concepts, approaches and methods that could be applied to a homework problem before either of you start your written solution. This process is encouraged. You are not required to make a written acknowledgment of this type of interaction.
-
After working on a problem independently, you compare answers with another student, which confirms your solution. You should acknowledge that the other student’s solution was used to check your own. No credit will be lost if the solutions are correct and the acknowledgment is made.
-
After working on a problem independently, you compare answers with another student, which alerts you to an error in your own work. You should state at the end of the problem that you corrected your error on the basis of checking answers with the other student. No credit will be lost if the solution is correct and the acknowledgment is made, and no direct copying of the correct solution is involved.
-
You and another student work through a problem together exchanging ideas as the solution progresses. Each of you should state at the end of the problem that you worked jointly. No credit will be lost if the solutions are correct and the acknowledgment is made.
-
You copy all or part of a solution from a reference such as a textbook or a “bible.” You should cite the reference. Partial credit will be given, since there is some educational value in reading and understanding the solution. However, this practice is strongly discouraged, and should be used only when you are unable to solve the problem without assistance.
-
You copy verbatim all or part of a solution from another student. This process is prohibited. You will receive no credit for verbatim copying from another student when you have not made any intellectual contribution to the work you are both submitting for credit.
-
Verbatim copying of any material which you submit for credit without reference to the source is considered to be academically dishonest.
Course Meeting Times
Lectures: 3 sessions / week, 1 hour / session
Purposes of This Course
- To ensure that you are aware of the wide range of easily accessible numerical methods that will be useful in your thesis research, at practice school, and in your career, as well as to make you confident to look up additional methods when you need them.
- To help you become familiar with MATLAB and other convenient numerical software, and with simple programming / debugging techniques.
- To give you some understanding of how the numerical algorithms work, to help you understand why algorithms sometimes produce unexpected results.
Grading
Homework Policy
Doing the homework is the best way to learn and you are encouraged to discuss the homework with the TAs, the professors, and other students in the class. It is fine to ask someone else to look over your shoulder to help you debug your programs, or so that person can see how you accomplished some task. However, You May Not:
- Post or provide any other student with an electronic or hard copy of any portion of your homework solution prior to the due date, or
- accept such a copy from another student, or
- accept, read, or use any homeworks from previous years (quizzes are fine).
It is acceptable to incorporate functions from other sources (e.g., from the previous week’s posted homework solution), as long as the author / source of a recycled function is properly credited in the homework. Violations of these policies will be considered cheating and can have very severe consequences.
When to Stop
Sometimes you may find a homework problem is consuming an inordinate amount of time even after you have asked for help. (This is an occupational hazard for all software developers.) If this happens, just turn in what you have done with a note indicating that you know your solution is incomplete. This course nominally requires 9 hours per week on average—perhaps a little more early on if you are not proficient with MATLAB.
Using MATLAB
Install this program as soon as possible (if not already installed). There will be MATLAB tutorial help sessions for any students who have not used this program before or who need a refresher. You are also encouraged to go through any of the numerous tutorials provided by Mathworks.
Reading Materials
Required textbook: Beers, Kenneth J. Numerical Methods for Chemical Engineering: Applications in MATLAB. Cambridge University Press, 2006. ISBN: 9780521859714. [Preview with Google Books]
You are expected to read the course materials before class, and to read the materials again before doing homework. Some reference books that may be helpful:
Press, W. H. Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press, 2007. ISBN: 9780521880688. [Preview with Google Books] (This comes in various editions) — This book provides short clear synopses of methods for many types of problems.
Recktenwald, G. Numerical Methods with MATLAB: Implementations and Applications. Pearson, 2000. ISBN: 9780201308600 — This book provides only simple numerical methods, but is a good introduction to using MATLAB.
Heath, Michael T. Scientific Computing. McGraw-Hill, 2002. ISBN: 9780072399103 — This book has more concise coverage of many topics.
There are also a very large number of textbooks on numerical methods for engineers, many of which have helpful examples implemented in MATLAB.
Course Description
This course aims to connect the principles, concepts, and laws/postulates of classical and statistical thermodynamics to applications that require quantitative knowledge of thermodynamic properties from a macroscopic to a molecular level. It covers the basic postulates of classical thermodynamics and their application to transient open and closed systems, criteria of stability and equilibria, as well as constitutive property models of pure materials and mixtures, emphasizing molecular-level effects using the formalism of statistical mechanics. Phase and chemical equilibria of multicomponent systems are covered. Applications are emphasized through extensive problem work relating to practical cases.
Course Info
Learning Resource Types
- Problem Sets
- Exams
Course Meeting Times
Lectures: 2 sessions / week, 2 hours / session
Vision
The goals of 10.40 are to connect the principles, concepts, and laws/postulates of classical and statistical thermodynamics to applications that require quantitative knowledge of thermodynamic properties from a macroscopic to a molecular level.
Approach
Focus on learning rather than grades. As we are revisiting the core area of thermodynamics, now is the time to really gain understanding of key concepts and to bring your problem-solving skills to a higher level.
- Your outside preparation: Read assigned material before class. Balance and prioritize your efforts on reading the text and other supplementary references, homework, and review for exams.
- Classtime: Overviews and summaries of topics combined with discussion of problem solving approaches. Interactive format with discussion and inquiry emphasized.
- Homework: Look over problems early. Consider alternative approaches with your classmates, but work out the complete solution individually.
- Exams: Understanding concepts and applying them to solving problems is the key to future success, not the individual scores on your tests.
Prerequisites
Thermodynamics and Kinetics (5.60)
Chemical Engineering Thermodynamics (10.213)
Text
Tester, Jefferson W., and Michael Modell. Thermodynamics and its Applications. Upper Saddle River, NJ: Prentice Hall, 1996. ISBN: 9780139153563.
Homework and Exams
Two exams, eleven problem sets, and a final exam are scheduled for the course. The exams will be 2 hours long and taken in class; the final will be 3 hours long in a take-home format. Your two exam scores and your grade on the final exam will each count equally, for a total of 60% of the course grade. Homework will count 30% and participation in class discussions 10%. Discussions with the instructors, teaching fellows, and teaching assistants about approaches to solving homework problems are encouraged. While students are welcome to also discuss problem-solving strategies with each other, each student is expected to work independently in arriving at and documenting the final solution he or she submits.
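The weighting above implies that each of the two exams and the final counts 20% of the course grade. A small sketch of the resulting grade computation (the scores below are hypothetical, on a 0-100 scale):

```python
# Course grade per the stated scheme: two exams and the final each count
# equally toward 60% (i.e., 20% each); homework 30%; participation 10%.
WEIGHTS = {"exam1": 0.20, "exam2": 0.20, "final": 0.20,
           "homework": 0.30, "participation": 0.10}

def course_grade(scores):
    """Weighted average of component scores (each on a 0-100 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights cover 100%
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical component scores:
print(round(course_grade({"exam1": 80, "exam2": 90, "final": 85,
                          "homework": 95, "participation": 100}), 2))
# prints 89.5
```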
The design project is the opportunity to integrate the knowledge and skills acquired over the undergraduate years in the Chemical Engineering department. As a team, you will develop a solution to a real design problem.
Progress Reports
The progress reports consist of a meeting with the instructor and a one-page summary of progress on the design project. The weekly progress report should be signed by all group members. (The meeting times will be assigned to avoid class conflicts so that all team members can be present.) Active participation during the meetings is encouraged, because the subjective evaluation by the instructor and teammates is partially based on these meetings.
Report Guidelines
The report should be typewritten, although it may be handwritten in ink if done neatly.
For the Design Report grade, we will not be able to double check your numbers, but we can decide if we believe in them by examining your report. Allow time to write this report. It pays to start early because writing the report often exposes oversights and inconsistencies which may be easily corrected if time exists.
The grade will be the same for all the group members.
No late reports will be accepted.
Read carefully the Design Report Handout (PDF)
Presentation Guidelines
Presentations to class and clients should last 30 minutes per group.
Evaluation of Team Members
After you submit the report, you will be able to give us your input on the performance of your group members by filling out an evaluation form. This evaluation will be held in confidence, and it will be taken into account for your subjective evaluation.
Some Caveats
This class is about process design. One of the most important features of solutions to design problems is that the answers are typically not unique - usually there is no ‘correct answer’ nor is the answer in the back of the book. The design process is further complicated by missing or conflicting information, ambiguous objectives, and no clear problem statement. As a result:
- You will have to make a lot of assumptions, document them, understand the implications and then move on.
- Keep to the time limit imposed by the units allocated to this class - the goal is to do the best that you can in the allocated time, not to exhaust yourself. Time management is critical.
- This is a team effort, plan and allocate the work across all members, meet regularly and iterate the design concepts.
2006 Projects
The design problems are different for each team. Each team is sworn to secrecy about its project so that the solutions will be unique and original. The problem topics are listed below along with a brief memorandum.
Course Description
In the ICE-Topics courses, various chemical engineering problems are presented and analyzed in an industrial context. Emphasis is on the integration of fundamentals with material property estimation, process control, product development, and computer simulation. Integration of societal issues, such as engineering ethics, environmental and safety considerations, and the impact of technology on society, is addressed in the context of case studies.
The broad context for this ICE-Topics module is the commonsense notion that, when designing something, one should plan for the off-normal conditions that may occur. A continuous process is conceived and designed as a steady-state operation. However, the process must start up, shut down, and operate in the event of disturbances, and so the time-varying behavior of the process should not be neglected. It is helpful to consider the operability of a process early in the design, when alternatives are still being compared. In this module, we will examine some tools that will help to evaluate the operability of the candidate process at the preliminary design stage, before substantial effort has been invested.
Course Description
Studies synthesis of polymeric materials, emphasizing interrelationships of chemical pathways, process conditions, and microarchitecture of molecules produced. Chemical pathways include traditional approaches such as anionic polymerization, radical condensation, and ring-opening polymerizations. Other techniques are discussed, including stable free radical polymerizations and atom transfer radical polymerizations (ATRP), catalytic approaches to well-defined architectures, and polymer functionalization in bulk and at surfaces. Process conditions include bulk, solution, emulsion, suspension, gas phase, and batch vs. continuous fluidized bed. Microarchitecture includes tacticity, molecular-weight distribution, sequence distributions in copolymers, and errors in chains such as branches, head-to-head addition, and peroxide incorporation.
Acknowledgements
The instructor would like to thank Karen Shu and Karen Daniel for their work in preparing material for this course site.
These notes are preliminary and may contain errors. Session topics and lecture notes (PDF) are listed below.

1. Course Overview; Polymer Design and Synthesis; Reaction Types and Processes; Introduction to Step Growth (PDF)

Step Growth Polymerization

2. Molecular Weight (MW) Control; Molecular Weight Distribution (MWD) in Equilibrium Step Condensation Polymerizations; Interchange Reactions: Effects on Processing and Product; Application Example: Common Polyesters (PDF)
3. Step Growth Polymerization; Types of Monomers; Kinetics and Equilibrium Considerations; Closed vs. Open Systems (PDF)
4. Common Processing Approaches; Near-equilibrium vs. Far from Equilibrium; Homogeneous Solution and Bulk Polymerization (PDF)
5. Interfacial Polymerizations; Application Examples: Polyamides (PDF)
6. Other Polymers of Interest Obtained by Step-Growth: Polyaramids; Polyimides; Segmented and Block Copolymers from Step Condensation Methods (PDF)
7. Crosslinking and Branching; Network Formation and Gelation; Carothers Equation: Pn Approach (PDF)
8. Network Formation; Statistical Approach: Pw Approach; A Word on MWD for Nonlinear Polymerizations (PDF)
9. Step-by-Step Approaches I: Polypeptide Synthesis: Examples from Biology; Step-by-Step Approaches II: Dendrimers, Traditional Convergent and Divergent Routes; New “one-pot” Approaches to Hyperbranched Species (PDF)

Free Radical Chain Polymerization

10. Introduction to Radical Polymerization (PDF)
11. Radical Polymerization; Homogeneous Reaction Rate Kinetics (PDF)
12. Free Radical Kinetic Chain Length; MWD; Chain Transfer; Energetics (PDF)
13. Thermodynamics of Free Radical Polymerizations; Ceiling T’s; Trommsdorff Effect; Instantaneous Pn (PDF)
14. Processing Approaches: Emulsion Polymerization Processes (PDF)
15. Processing Approaches: Suspension (Bead) Polymerization Processes; Polyvinyl Chloride Via Precipitation Polymerization; Polyethylene Via Radical Polymerization (PDF)
16. Ziegler-Natta Catalysis; Stereochemistry of Polymers (PDF)
17. Stereoregular Polymerizations (PDF)
18. Radical Copolymerization: Alternating to Block Copolymers (PDF)

Ionic Polymerization

19. Metallocene Chemistry; Introduction to New Developments from Brookhart, et al. (PDF)
20. Introduction to Anionic Polymerization; Monomers Applicable to Anionic Methods; Kinetics of “Nonliving” Anionic Polymerization (PDF)
21. Living Anionic Polymerization; Effects of Initiator and Solvent (PDF)
22. Anionic Block Copolymerization (PDF)
23. Anionic Ring Opening Polymerization; End Group Functionalization; Telechelic Oligomers and Novel Architectures Using Coupling Techniques (PDF)
24. Introduction to Cationic Polymerization; Monomers; Kinetics (PDF)
25. “Living” Cationic Polymerizations; Examples of Cationic Polymerization: Isobutyl Rubber Synthesis, Polyvinyl Ethers (PDF)
26. Anionic Ring Opening Polymerization; Cationic Ring Opening Polymerization; Other Ring Opening Polymerizations (PDF)
27. Polysiloxanes, Lactams, etc. (PDF)

Polymer Functionalization and Modification

28. Introduction to Polymer Functionalization: Motivations, Yield, Crystallinity, Solubility Issues; Common Functionalization Approaches (PDF)
29. Functionalization Case Studies: Biomaterials Systems, Liquid Crystal (LC) Polymers (PDF)

Less Traditional Approaches to Polymer Synthesis

30. Surface Functionalization of Polymers; Graft Copolymerization Approaches to Making Comb and Graft Architectures; Grafting onto Existing Polymer Surfaces; Surface Engineering Using Graft Copolymers (PDF)
31. “Living” Free Radical Approaches: Stable Free Radical Polymerization; Atom Transfer Radical Polymerization (ATRP) (PDF)
32. ATRP; RAFT and Other New Methods; Ring Opening Metathesis Polymerization (ROMP) (PDF)
33. ROMP; Oxidative Coupling; Electrochemical Polymerizations; Case Study: Electro-active Polymers (PDF)
34. Inorganic Polymer Synthesis (PDF)
35. Macromolecular Systems Via Secondary Bonding: Use of H-bonding and Ionic Charge to Build Structures; Concept of Self-Assembly - From Primary Structure to Complex Structure (PDF)
SES # TOPICS READINGS Part 1: What is Urban Design and Development? - Translating Values into Design 1 Introduction Questions of the day: What is urban design? What is urban development? How are they connected and how do they affect our lives? 2 Ways of Seeing the City Questions of the day: What are the visible signs of change in cities? How can we measure the form of cities? How do the underlying values of the observer influence what is observed? Clay, Grady. “Epitome Districts.” In Close-Up: How to Read the American City. Chicago, IL: University of Chicago Press, 1980, pp. 38-65. ISBN: 0226109453. Jacobs, Allan. “Clues,” and “Seeing Change.” In Looking At Cities. Cambridge, MA: Harvard University Press, 1985, pp. 30-83 and 99-107. ISBN: 0674538919. Stilgoe, John. “Beginnings,” and “Endings.” In Outside Lies Magic. New York, NY: Walker and Co., 1998, pp. 1-19 and 179-187. ISBN: 0802713408. Part 2: The American City - The Forces That Shape Our Cities 3 The Forces That Made Boston Questions of the day: What does the history of Boston’s development tell us about the issues facing the city today? Are these forces common to all cities? Krieger, Alex. “Past Futures: Boston - Visionary Plans and Practical Visions.” Places 5, no. 3 (1989): 56-71. Lynch, Kevin. The Image of the City. Cambridge, MA: MIT Press, 1960, pp. 1-25. ISBN: 9780262620017. Seasholes, Nancy S. “Back Bay and South End.” In Gaining Ground: A History of Landmaking in Boston. Cambridge, MA: MIT Press, 2003, pp. 152-209. ISBN: 9780262194945. 4 Walking Tour of Boston Meet at the Government Center T-Stop (outside in front of the City Hall) at 8:00 am. For those students who can’t join the tour until 10:30 - we will be in the Skywalk of the Prudential Center Tower (800 Boylston Street between Exeter and Gloucester Streets) at approximately 10:30 am. We will end the tour at noon at South Station Quincy Market where you can have lunch and/or catch a train back to MIT. Whitehill, Walter Muir, and Lawrence W. 
Kennedy. Boston: A Topographical History. Cambridge, MA: Belknap Press of Harvard University Press, 1959, Revised in 1968 and in 2000. ISBN: 0674002687. Campbell, Robert. “After the Big Dig, the Big Question: Where’s the Vision?” Boston Globe, May 26, 2002. ———. “Beyond the Big Dig National Panel Recommendations.” Boston Globe, May 30, 2002. Review Boston.com’s “Beyond the Big Dig” prior to the walking tour. 5 The Design of American Cities Questions of the day: What can you tell about a city’s origins from its founders? What is the difference between agrarian settlements and industrial cities? What happened to cities as America industrialized? Morris, Anthony E. J. “Urban USA.” In History of Urban Form. London, UK: George Godwin, 1979, pp. 254-289. ISBN: 0711455120. Chudacoff, Howard P., and Judith E. Smith. “Urban America in the Colonial Age, 1600-1776.” In The Evolution of American Urban Society. Englewood Cliffs, NJ: Prentice-Hall, 1981, pp. 1-35. ISBN: 0132936054. 6 The Industrial City and Its Critics Questions of the day: What were nineteenth century and early twentieth century housing and workplace reformers trying to reform? Do we still have company towns? Hall, Peter. “The City of Dreadful Night.” In Cities of Tomorrow. Oxford, UK: Blackwell, 1988, pp. 14-46. ISBN: 0631134441. Crawford, Margaret. “Textile Landscapes: 1790-1850,” and “The Company Town in an Era of Industrial Expansion.” In Building the Workingman’s Paradise: The Design of American Company Towns. New York, NY: Verso, 1995, pp. 11-45. ISBN: 0860916952. In-class video: Excerpt from The Workplace, on Lowell, Massachusetts and Pullman, Illinois. 7 Development Controls Part I: The Institutionalization of Planning and Zoning Question of the day: Can we design cities without designing buildings? How can zoning and other design controls improve our public space? Lewis, Roger K. “The Powers and Pitfalls of Zoning,” and “From Zoning to Master Planning… and Back.” In Shaping the City. 
Washington, DC: AIA Press, 1987, pp. 274-281. ISBN: 0913962880. Barnett, Jonathan. “Zoning, Mapping, and Urban Renewal As Urban Design Techniques,” and “Designing Cities Without Designing Buildings.” In An Introduction to Urban Design. New York, NY: Harper and Row, 1982, pp. 57-97. ISBN: 0064301141. “Citizen’s Guide to Zoning for Boston.” Boston, MA: Boston Redevelopment Authority, pp. 1-24. Babcock, Richard. “The Stage - Historical and Current,” and “The Purpose of Zoning.” In The Zoning Game. Madison, WI: University of Wisconsin Press, 1966, pp. 3-18 and 115-125. ISBN: 0299040941. 8 Development Controls Part II: Beyond Zoning: Urban Design Guidelines, Design Review and Development Incentives Questions of the day: What is the relationship between development incentives and quality public space? Can urban design guidelines and design review ensure good urban design? What are the newest development controls used by planners? Whyte, William H. “The Rise and Fall of Incentive Zoning.” In City: Rediscovering the Center. New York, NY: Doubleday, 1988, pp. 229-55. ISBN: 0385054580. Scheer, Brenda Case. “Introduction: The Debate on Design Review.” In Design Review: Challenging Urban Aesthetic Control. Edited by Brenda Case Scheer and Wolfgang F. E. Preiser. New York, NY: Chapman & Hall, 1994, pp. 1-10. ISBN: 0412991616. Jaffe, Martin. “Performance Zoning - A Reassessment.” Land Use Law and Zoning Digest 45, no. 3 (1993): 3-9. In-class video: Whyte, William H. The Social Life of Small Urban Spaces, selections. VHS. New York, NY: Municipal Art Society, 1984. Part 3: Changing Cities by Designing New Ones 9 Three Urban Utopias: - Ebenezer Howard’s Garden City - Le Corbusier’s Radiant City - Frank Lloyd Wright’s Broadacre City Questions of the day: What assumptions does each thinker make about how people should live in cities? What beliefs does each hold about the relationship between city design and social change? 
What aspects of these “utopias” have actually come to pass? Howard, Ebenezer. “Introduction,” and “The Town-Country Magnet.” In Garden Cities of To-Morrow. Cambridge, MA: MIT Press, 1965, pp. 41-57. ISBN: 9780262580021. Wright, Frank Lloyd. “Broadacre City.” In Truth Against the World: Frank Lloyd Wright Speaks for an Organic Architecture. New York, NY: Wiley and Sons, 1987, pp. 351-361. ISBN: 0471845094. Corbusier, Le. The City of To-morrow and Its Planning. New York, NY: Dover, 1987, pp. 232-247 and 275-288. ISBN: 0486253325. 10 New Towns in the United States and Abroad Question of the day: What motivates planners to design new towns? Stein, Clarence. “Radburn, New Jersey.” In Toward New Towns for America . Cambridge, MA: MIT Press, 1966. ISBN: 9780262690096. Birch, Eugenie L. “Five Generations of the Garden City.” In From Garden City to Green City. Edited by Kermit C. Parsons and David Schuyler. Baltimore, MD: Johns Hopkins University Press, 2002, pp. 171-200. ISBN: 0801869447. Explore the website for the Las Vegas, Nevada community of Summerlin. Part 4: Changing Cities by Extending Them - Designing Suburbs and Regions 11 The Suburbs Part I: The Origins and Growth of Suburbs Questions of the day: Why do we have suburbs? How and why do the designs of new suburbs differ from the designs of older ones? Jackson, Kenneth. “Introduction,” “The Transportation Revolution and the Erosion of the Walking City,” and “Affordable Homes for the Common Man.” In Crabgrass Frontier: The Suburbanization of the United States. New York, NY: Oxford University Press, 1985, pp. 3-11, 20-44, and 116-137. ISBN: 0195049837. Fishman, Robert. “The Post-War American Suburb: A New Form, A New City.” In Two Centuries of American Planning. Edited by Daniel Schaffer. Baltimore, MD: Johns Hopkins University Press, 1988, pp. 265-278. ISBN: 0801837197. Stilgoe, John. “Intellectual & Practical Beginnings.” In Borderland: Origins of the American Suburb, 1820-1939. 
New Haven, CT: Yale University Press, 1988, pp. 21-64. ISBN: 0300042574. 12 The Suburbs Part II: Rethinking American Suburbs Questions of the day: How do “urbanism” and “suburbanism” differ as “ways of life”? What is the appeal of small town life, and can this be designed? Southworth, Michael, and Peter M. Owens. “The Evolving Metropolis: Studies of Community, Neighborhood and Street Form at the Urban Edge.” Journal of the American Planning Association 59, no. 3 (1993): 271-287. Gans, Herbert. “Urbanism and Suburbanism as Ways of Life: A Re-evaluation of Definitions.” In People, Plans and Policies: Essays on Poverty, Racism, and Other National Urban Problems. New York, NY: Columbia University Press, 1994. ISBN: 0231074034. Krier, Leon. “Town and Country,” “Critique of Zoning,” “Critique of Industrialisation,” “The Idea of Reconstruction,” and “Urban Components.” In Leon Krier: Houses, Palaces, Cities. New York, NY: St. Martin’s Press, 1985, pp. 30-42. ISBN: 0312479905. Hayden, Dolores. “Nostalgia and Futurism.” In Building Suburbia: Green Fields and Urban Growth, 1820-2000. New York, NY: Vintage Books, 2004, pp. 201-229. ISBN: 0375727213. “Bye Bye Suburban Dream.” Newsweek, May 15, 1994. 13 Shaping Private Development/Growth Management Questions of the day: What are the social consequences of sprawl? Can private development be controlled to manage growth on the regional scale? What are the current techniques used to manage growth? Guest speaker: Westwood, MA town officials and Cabot, Cabot & Forbes representative - developers for new TOD in former industrial park along the Westwood commuter rail line. Flint, Anthony. “Instant Suburb: the Growth of the Suburban Belt along the Interstate 495 Corridor Is Fast, Intense, and Largely Unplanned. Is It Good for Community? Hopkinton Takes Stock.” Boston Globe Magazine, June 16, 2002. Calthorpe, Peter, and William Fulton. “Designing the Region.” Chapter 6 in The Regional City: Planning for the End of Sprawl. 
Washington, DC: Island Press, 2001, Introduction, Conclusion, pp. 107-171 and 271-277. ISBN: 1559637846. Gillham, Oliver. “Outlining the Debate.” In The Limitless City: A Primer on the Urban Sprawl Debate. Washington, DC: Island Press, 2002, pp. 69-81. ISBN: 1559638338. 14 Midterm Exam Part 5: Changing Cities by Redesigning Their Centers 15 Urban Renewal and Its Critics Questions of the day: When does a “neighborhood” become a “slum”? How does one achieve a balance between “renewal” and “preservation”? Gans, Herbert. “The West End: An Urban Village.” In The Urban Villagers: Group and Class in the Life of Italian-Americans. New York, NY: Free Press, 1962, pp. 3-16. ISBN: 0029112400. Jacobs, Jane. The Death and Life of Great American Cities. New York, NY: Vintage Books, 1961, pp. 3-25. ISBN: 067974195X. Mumford, Lewis. “Home Remedies for Urban Cancer.” In The Urban Prospect. New York, NY: Harcourt Brace Jovanovich, 1968, pp. 182-207. ISBN: 0151931909. Gans, Herbert. “Urban Vitality and the Fallacy of Physical Determinism” (Review of Jane Jacobs’ book). In People, Plans and Policies: Essays on Poverty, Racism, and Other National Urban Problems. New York, NY: Columbia University Press, 1994. ISBN: 0231074034. In-class video: The West End. 16 The Tumult of American Public Housing Question of the day: What does urban design have to do with the problems of American public housing? Guest speaker: Professor Lawrence J. Vale Franck, Karen A., and Michael Mostoller. “From Courts to Open Space to Streets: Changes in the Site Design of U.S. Public Housing.” Journal of Architectural and Planning Research 12, no. 3 (1995): 186-220. Newman, Oscar. “Housing Design and the Control of Behavior.” In Community of Interest. New York, NY: Doubleday & Company, 1980, pp. 48-77. ISBN: 0385111231. Vale, Lawrence J. “From the Puritans to the Projects: The Ideological Origins of American Public Housing.” Harvard Design Magazine 8 (1999): 52-57. McKee, Bradford. 
“Public Housing’s Last Hope.” Architecture 86, no. 8 (1997): 94-105. 17 Cultural Districts, Heritage Areas and Tourism: If You Name It, Will They Come? Question of the day: How can urban designers, developers and planners create new economic value for historic places and the inner city? Frenchman, Dennis. “Narrative Places and the New Practice of Urban Design.” In Imaging the City: Continuing Struggles and New Directions. Edited by Lawrence J. Vale and Sam Bass Warner, Jr. New Brunswick, NJ: Center for Urban Policy Research, 2001. ISBN: 0882851691. Skim “Executive Summary of the Master Plan for the Worcester Arts District, Community Partners Consultants, Inc., 2002.” (PDF - 8.7 MB) 18 Discussion of Exercise 2 19 Downtown Development and the Privatization of Public Space Question of the day: Is ‘Public Space’ being ‘Privatized’? Frieden, Bernard, and Lynne Sagalyn. Downtown, Inc.: How America Rebuilds Cities. Cambridge, MA: MIT Press, 1989, pp. 1-13 and 215-238. ISBN: 0262560593. Rybczynski, Witold. “The New Downtowns.” The Atlantic Monthly 271, no. 5 (1993): 98-106. Robertson, Kent A. “Downtown Redevelopment Strategies in the United States: An End-of-the-Century Assessment.” Journal of the American Planning Association 61, no. 4 (1995): 429-437. Part 6: New Ways of Seeing, New Ways of Planning 20 Landscape, the Environment and the City Questions of the day: How has concern for the landscape, open space, environment and quality of life shaped cities? Can cities be truly “green”? Guest speaker: Thomas Oles 21 Natural Processes Guest speaker: Thomas Oles 22 Transportation and Its Impacts Question of the day: How has public transportation policy shaped urban form? Geddes, Norman Bel. Excerpt from “Magic Motorways.” In Building the Nation: Americans Write about Their Architecture, Their Cities, and Their Landscape. Edited by Steven Conn and Max Page. Philadelphia, PA: University of Pennsylvania Press, 2003, pp. 228-231. ISBN: 0812218523. Downs, Anthony. 
“Remedies That Increase Residential Densities,” “Changing the Jobs-Housing Balance,” and “Concentrating Jobs in Large Clusters.” In Still Stuck in Traffic. Washington, DC: Brookings Institution Press, 2004, pp. 79-126. ISBN: 0815719299. In Class Video: Klein, Jim and Martha Olson. Taken for a Ride, selections. VHS. Harriman, NY: New Day Films, 1996. 23 The Rise of Community Activism Questions of the day: How has community participation changed urban design and development? Can urban development be a force for social equity? Guest speaker: Lizbeth Heyer, Associate Director of Community Development, Jamaica Plain Neighborhood Development Corporation. Peirce, Neal R., and Robert Guskind. “Boston’s Southwest Corridor: People Power Makes History.” In Breakthroughs: Recreating the American City. New Brunswick, NJ: Center for Urban Policy Research, 1993, pp. 83-114. ISBN: 0882851454. 24 The Virtual City Question of the day: How have advances in telecommunications technology changed the way we use and conceive cities? Guest speaker: Dennis Frenchman Mitchell, William J. “March of the Meganets,” and “Homes and Neighborhoods.” In E-topia: “Urban Life, Jim, But Not As We Know It”. Cambridge, MA: MIT Press, 1999. ISBN: 0262632055. Read “Urban Renewal, the Wireless Way.” 25 The Secure City - The Fortification of Space Question of the day: How are concerns about safety and security shaping public space and redefining communities? Low, Setha. “Unlocking the Gated Community,” and “Fear of Others.” In Behind the Gates: Life, Security, and the Pursuit of Happiness in Fortress America. New York, NY: Routledge, 2004, pp. 7-26 and 133-152. ISBN: 0415950414. Kostof, Spiro. “Keeping Apart.” In The City Assembled: The Elements of Urban Form Through History. London, UK: Thames & Hudson, 2005. pp. 102-110. ISBN: 0500281726. Davis, Mike. “Fortress Los Angeles: The Militarization of Urban Space.” Variations on a Theme Park: The New American City and the End of Public Space. 
Edited by Michael Sorkin. New York, NY: Noonday, 1992. ISBN: 0374523142. Zinganel, Michael. “Crime Does Pay! How Security Technology, Architecture and Town Planning Are Powered by Crime.” Archis (March 2002): 44-50. Please skim: National Capital Planning Commission, The National Capital Urban Design and Security Plan. October 2002. (PDF) Read page 4 Key Findings and Recommendations in “Designing for Security in the Nation’s Capital.” (PDF - 2.7 MB) 26 Discussion of Final Paper 27 Final Exam
Assignment
In the United States, passing gun control measures over the past twenty years has proven very difficult. Yet, in a similar time frame, both the U.K. and Australia enacted significant gun control reforms. Why has it been so much harder to pass gun control legislation in the U.S. than other countries?
Please write a five-page essay in which you make and defend an argument about why gun control reforms have proven more difficult to pass in the United States than in other countries. Analyze both the politics and policy options surrounding gun control reforms.
In the conclusion of your essay, explain the implications of your argument for gun control advocates. Specifically, what would you advise gun control advocates to do differently in order to enact stronger gun control in the United States? This analysis should include a discussion of both political strategy and specific policy proposals.
Note: In this essay, we will be doing a peer review. Prepare a full draft of your essay before class session 13, and print out a copy to bring to class. Your TA will collect your essay, and bring it to recitation for a peer edit. Once you’ve received peer feedback, revise your essay before handing it in at class session 14.
Student Examples
The examples below appear courtesy of MIT students and are used with permission. Examples are published anonymously unless otherwise requested.
The Difficulty of Passing Gun Control in the United States (PDF)
Assignment
In order to change policy, advocates must make smart decisions about when and where to push for policy change.
Write a 5-page essay in which you compare and contrast the strategic choices made by proponents of immigration and same-sex marriage over the past 10–15 years. For instance, advocates have to make choices about when to seek policy changes, the appropriate policy outcome to pursue at a given point in time, and the best political venues to seek policy changes.
Be sure to analyze how advocates’ strategic choices evolved in response to changes in the political context (e.g., public opinion, elections, etc). In the conclusion of your essay, discuss where you believe proponents of policy change on these two issues are likely to focus their advocacy efforts over the next two years.
Student Examples
The examples below appear courtesy of MIT students and are used with permission. Examples are published anonymously unless otherwise requested.
The Strategies of Gay Marriage and Immigration Reform Advocates (PDF)
Strategic Policy Efforts for Immigration and Same-Sex Marriage Advocates (PDF)
Comparing the Strategic Efforts of Gay Marriage and Immigration Reform Advocates (PDF)
Course Overview
This page focuses on the course 11.002J/17.30J Making Public Policy as it was taught by Prof. Christopher Warshaw and Leah Stokes in Fall 2014.
This course aimed to get students thinking about politics and policy as a part of their everyday lives. We treated politics as a struggle among competing advocates trying to persuade others to see the world as they do, working within a context that is structured primarily by institutions and cultural ideas. We began by developing a policymaking framework, understanding ideology, and taking a whirlwind tour of the American political system. Then, we examined six policy issues in depth: health care, gun control, the federal budget, immigration reform, same-sex marriage, and energy and climate change. We concluded the course with a summary class and a student-driven, in-class oral project.
Course Outcomes
Course Goals for Students
- Acquire substantive knowledge about public policy in the US
- Develop critical reasoning skills
- Analyze policy and understand political arguments
- Improve oral and written communication skills
Instructor Interview
Below, Leah Stokes describes various aspects of how she and Prof. Chris Warshaw taught 11.002J/17.30J Making Public Policy.
For many students at MIT, public policy, political science, planning, and even social science, more generally, are not their primary fields of study. In order to ensure that students remained engaged throughout the course, we tried not to make lectures a one-way experience. We actively called on students throughout the class, including cold calling to check that they were keeping up with the readings and grasping the main ideas. Furthermore, we embedded interactive class activities in many of the lectures, allowing the students to discuss topics with each other in small groups and to hear different points of views.
Curriculum Information
Prerequisites
No previous coursework required.
Requirements Satisfied
CI-H
HASS
HASS-S
- Requirement for a Bachelor of Science in Planning
- 11.002J/17.30J can be applied toward a Bachelor of Science in Environmental Engineering Science, but is not required
- 11.002J/17.30J can be applied toward a Bachelor of Science in Political Science, but is not required
Offered
Every fall semester
Assessment
The students’ grades were based on the following activities:
- 65% Papers (varying percentages)
- 10% Final Oral Project
- 25% Class Participation
Student Information
Enrollment
Limiting the class to between 50 and 60 students is ideal because it would be difficult to have active classroom discussions with a class size larger than this. Having 50 to 60 students also allows the teaching assistants to divide the class into four reasonably-sized recitation sections.
How Student Time Was Spent
During an average week, students were expected to spend 12 hours on the course, roughly divided as follows:
In Class
- Met 2 times per week for 1.5 hours per session; 26 sessions total; mandatory attendance
Recitation
- Met 1 time per week for 1 hour per session; 12 sessions total; mandatory attendance
Out of Class
- Weekly readings, which provided background information and a broader theoretical framework for analyzing policy issues
- Four papers on specific policy issues featured during the course
Course Team Roles
Instructors (Prof. Chris Warshaw & Leah Stokes)
The instructors ran lecture sessions, trading off based on policy modules and their different backgrounds and areas of expertise.
Teaching Assistants
Two teaching assistants facilitated weekly recitation sessions and managed grading.
In addition to reading the material for each class session, students are expected to stay up to date with current U.S. public policy news. Suggested ways include reading a daily newspaper (The New York Times, The Wall Street Journal, The Washington Post, or The Boston Globe) or listening to radio news. SES # TOPICS READINGS 1 Introduction None I: A (Brief) Policy and Politics Framework 2 An Introduction to Policy and Politics Theory Readings Ideology Friedman, Milton. Capitalism and Freedom. University of Chicago Press, 2002, pp. 1–36. ISBN: 9780226264219. [Preview with Google Books] Kuttner, Robert. “The Limits of Markets.” The American Prospect, December 19, 2001. Roemer, John E. Equality of Opportunity. Harvard University Press, 2000, pp. 1–12. ISBN: 9780674004221. [Preview with Google Books] Public Policymaking Kingdon, John W. Chapter 9 in Agendas, Alternatives, and Public Policies. 2nd ed. Harper Collins, 1995, pp. 196–208. Electoral Connection Mayhew, David. Congress: The Electoral Connection. Yale University Press, 1974, pp. 13–17. Stimson, James, Michael MacKuen, and Robert S. Erikson. “Dynamic Representation.” In Principles and Practice of American Politics. CQ Press, 2012, pp. 466–80. II: Setting the Agenda and Shaping Policy Options 3 Health Care: Interest Groups and Public Opinion – Defining the Problem in the Contemporary Era Case Readings “The Health Report to the American People.” Citizens’ Health Care Working Group. March 31, 2006. (Read section I, “A Snapshot of Healthcare Issues in America,” 1–2, and Section V, “Health Care Access: Not Getting the Health Care You Need,” 15–19.) Skocpol, Theda. “Introduction.” In Boomerang: Health Care Reform and the Turn against Government. W. W. Norton & Company, 1997, pp. 1–19. ISBN: 9780393315721. Blendon, Robert J., and John M. Benson. “Americans’ Views on Health Policy: A Fifty-Year Historical Perspective.” Health Affairs 20, no. 2 (2001): 33–46. Blendon, Robert J., and John M. Benson. 
“Public Opinion at the Time of the Vote on Health Care.” New England Journal of Medicine 362 (2010): 55. 4 Health Care: Passing Health-Care Reform in Congress Theory Readings Congressional Action Arnold, R. Douglas. The Logic of Congressional Action. Yale University Press, 1990, pp. 3–16. ISBN: 9780300048346. Krehbiel, Keith. Pivotal Politics: A Theory of U.S. Lawmaking. University of Chicago Press, 1998, pp. 20–48. ISBN: 9780226452722. [Preview with Google Books] Case Readings Cohn, Jonathan. “How They Did It.” The New Republic, June 2010, 14–25. Oberlander, Jonathan. “Long Time Coming: Why Health Reform Finally Passed.” Health Affairs 29, no. 6 (2010): 1112–6. “Public Health Insurance Option,” The Huffington Post, March 25, 2010. Cannon, Michael F. “Yes, Mr. President. A Free Market Can Fix Health Care.” (PDF) Cato Institute Policy Analysis, no. 650 (2009). 5 Health Care: Implementing Health-Care Reform Case Readings Introduction Skocpol, Theda. “The Political Challenges That May Undermine Health Reform.” Health Affairs 29, no. 7 (2010): 1288–92. Legal Challenges Liptak, Adam. “Health Law Puts Focus on Limits of Federal Power,” The New York Times, November 13, 2011. “A Clean Bill of Health,” The Economist, July 28, 2012. State-level Implementation Jost, Timothy. “Implementing Health Reform: A GAO Progress Report on the Exchanges.” Health Affairs Blog, June 2013. Luthra, Shefali. “Promoting Health Insurance Exchange, with No Help from State,” The New York Times, July 18, 2013. Abelson, Reed. “Choice of Health Plans to Vary Sharply from State to State,” The New York Times, June 16, 2013. Campbell, Andrea Louise. “The Future of U.S. Health Care,” Boston Review, August 2012. 6 Policy Evaluation: Health Care and Gun Control Theory Readings Policy Evaluation Singleton, Royce A., and Bruce C. Straits. Approaches to Social Research. 5th ed. Oxford University Press, 2009, pp. 1–12. ISBN: 9780195372984. Howlett, Michael, M. Ramesh, and Anthony Perl. 
“Policy Evaluation: Policy-Making as Policy Learning.” In Studying Public Policy: Policy Cycles and Policy Subsystems. Oxford University Press, 2009, pp. 171–85. ISBN: 9780195428025. Case Readings Health Care and Gun Control Massachusetts Health Care Reform: Six Years Later (PDF). Kaiser Family Foundation. 2012. (read Executive Summary on p. 1 and table on p. 9 closely; skim the rest) Baicker, Katherine, et al. “The Oregon Experiment—Effects of Medicaid on Clinical Outcomes.” New England Journal of Medicine 368 (2013): 1713–22. Cohn, Jonathan. “Obamacare’s Impact on the Uninsured, State by State.” The New Republic, August 2014. Miller, Matthew, Deborah Azrael, and David Hemenway. “Firearms and Violent Death in the United States.” In Reducing Gun Violence in America: Informing Policy with Evidence and Analysis. Johns Hopkins University Press, 2013. ISBN: 9781421411101. [Preview with Google Books] Aneja, Abhay, et al. “The Impact of Right to Carry Laws and the NRC Report: The Latest Lessons for the Empirical Evaluation of Law and Policy.” NBER Working Paper no. 18294, 2012. VerBruggen, Robert. “More Handguns, Less Crime—or More?” The American Spectator, June 2010. 7 Gun Control: Agenda Setting and Issue Framing Theory Readings Advocates (Interest Groups) and Their Stories Stone, Deborah. “Causal Stories and the Formation of Policy Agendas.” Political Science Quarterly 104, no. 2 (1989): 281–300. Case Readings Lepore, Jill. “Battleground America,” The New Yorker, April 23, 2012. Toobin, Jeffrey. “So You Think You Know the Second Amendment?” The New Yorker, December 17, 2012. Goldberg, Jeffrey. “The Case for More Guns (And More Gun Control).” The Atlantic Monthly, December 2012. The Institute for Legislative Action: The Lobbying Arm of the NRA. National Rifle Association. Explore the Everytown for Gun Safety website. Explore the Americans for Responsible Solutions website. 
8 Gun Control: Organized and Unorganized Interests Guest Lecture: Professor Regina Bateson Theory Readings Interest Group Organization Olson, Mancur. “The Logic of Collective Action.” In The Enduring Debate: Classic and Contemporary Readings in American Politics. W. W. Norton & Company, 2013, pp. 425–33. ISBN: 9780393921588. Bateson, Regina. “Crime Victimization and Political Participation.” American Political Science Review 106, no. 3 (2012): 570–87. (Read intro/conclusion; skim regression results) Case Readings Goss, Kristin A. Chapter 1 in Disarmed: The Missing Movement for Gun Control in America. Princeton University Press, 2006. ISBN: 9780691124247. [Preview with Google Books] Zernike, Kate. “Christie Veto of Gun Control Bill Angers Relatives of Newtown Victims,” The New York Times, July 3, 2014. Weisman, Jonathan. “Senate Blocks Drive for Gun Control,” The New York Times, April 17, 2013. Cobb, Jelani. “Perceived Threats,” The New Yorker, July 29, 2013. Barron, James. “Taking a Bullet, Gaining a Cause: James S. Brady Dies at 73,” The New York Times, August 4, 2014. North, Michael J. “Gun Control in Great Britain after the Dunblane Shootings.” In Reducing Gun Violence in America: Informing Policy with Evidence and Analysis. Johns Hopkins University Press, 2013. ISBN: 9781421411101. [Preview with Google Books] 9 Gun Control: Policy and Evaluation Case Readings Gopnik, Adam. “A Few Simple Ideas about Gun Control,” The New Yorker, October 1, 2013. Christie, Drew. “The Ghost of Gun Control.” April 22, 2013. The New York Times, Accessed June 24, 2015. http://www.nytimes.com/video/opinion/100000002184971/the-ghost-of-gun-control.html Peters, Rebecca. “Rational Firearm Regulation: Evidence-based Gun Laws in Australia.” In Reducing Gun Violence in America: Informing Policy with Evidence and Analysis. Johns Hopkins University Press, 2013. ISBN: 9781421411101. [Preview with Google Books] Alpers, Philip. 
“The Big Melt: How One Democracy Changed after Scrapping a Third of its Firearms.” In Reducing Gun Violence in America: Informing Policy with Evidence and Analysis. Johns Hopkins University Press, 2013. ISBN: 9781421411101. [Preview with Google Books] Miller, Matthew, Deborah Azrael, and David Hemenway. “Consensus Recommendations for Reforms to Federal Gun Policies.” In Reducing Gun Violence in America: Informing Policy with Evidence and Analysis. Johns Hopkins University Press, 2013. ISBN: 9781421411101. [Preview with Google Books] Cook, Philip J., and Jens Ludwig. “The Limited Impact of the Brady Act.” In Reducing Gun Violence in America: Informing Policy with Evidence and Analysis. Johns Hopkins University Press, 2013. ISBN: 9781421411101. III: Making Policy Decisions 10 Federal Budget: Recession, Deficit, and Problem Definition Case Readings Problem Definition Congressional Digest. “The Deficit and the Debt: Meeting America’s Fiscal Challenges,” February 2010. National Commission on Fiscal Responsibility and Reform (Simpson-Bowles Commission). “The Moment of Truth,” (PDF - 1.5MB) December 2010, pp. 6–16. Krugman, Paul. “Nobody Understands Debt,” The New York Times, January 1, 2012. Burman, Leonard. Taxes and Inequality (PDF), 2014. Case Readings Interest Groups Wasson, Erik. “Liberals Fear Betrayal by Obama on Social Security,” The Hill, January 5, 2011. 11 Federal Budget: Recession, Deficit, and Problem Definition (cont.) Case Readings Political Context Zeleny, Jeff. “G. O. P. Captures House, but Not Senate,” The New York Times, November 2, 2010. Case Readings The Debt Ceiling, Fiscal Cliff, and Sequester Cha, Ariana Eunjung. “What’s the Debt Ceiling, and Why is Everyone in Washington Talking about It?” The Washington Post, April 18, 2011. Dennis, Brady, Alex MacGillis, et al. “The Origins of a Showdown,” The Washington Post, August 7, 2011. Weisman, Jonathan. “Q & A: Understanding the Fiscal Cliff,” The New York Times, November 14, 2012. 
In class Watch Cliffhanger (Frontline documentary) In recitation Solve the deficit 12 Federal Budget: Legislative Tactics – The Fiscal Cliff and the Sequester Case Readings Ball, Molly. “What Do the People Want From the Fiscal-Cliff Deal?” The Atlantic Monthly, November 2012. Bresnahan, John, Carrie Budoff Brown, et al. “The Fiscal Cliff Deal that Almost Wasn’t,” Politico, January 2, 2013. “Nothing to be Proud Of.” The Economist, January 5, 2013. Matthews, Dylan. “The Sequester: Absolutely Everything You Could Possibly Need to Know, in One FAQ,” The Washington Post, February 20, 2013. 13 Immigration Reform: History, Economic Impacts, and Current Policy Case Readings Clarkson, Stephen, and Matto Mildenberger. “Supplying Workers for the US Labour Market.” In Dependent America: How Canada and Mexico Construct U.S. Power. University of Toronto Press, 2011. ISBN: 9781442612778. [Preview with Google Books] Immigration Policy Center. “How the United States Immigration System Works: A Fact Sheet.” 2014. Card, D. “The Impact of the Mariel Boatlift on the Miami Labor Market.” Industrial and Labor Relations Review 43, no. 2 (1990): 245–57. (Skim) Borjas, George J. “For a Few Dollars Less,” The Wall Street Journal, April 2006. Lowenstein, Roger. “The Immigration Equation,” The New York Times Magazine, July 2006. 14 Immigration Reform: Legislative Efforts and the Electoral Connection Theory Readings Executive’s Relations with Congress Kingdon, John W. Agendas, Alternatives and Public Policies. 2nd ed. Longman, 1995, pp. 21–30. Davidson, Roger H. “Presidential Relations with Congress.” In Understanding the Presidency. 6th ed. Longman, 2011, pp. 253–67. Case Readings Lizza, Ryan. “Getting to Maybe,” The New Yorker, June 24, 2013. Weiner, Rachel. “How Immigration Reform Failed, Over and Over,” The Washington Post, January 30, 2013. PBS NewsHour. “Will 2014 Yield Immigration Reform?” December 23, 2013. YouTube. Accessed June 24, 2015. 
http://www.youtube.com/watch?v=yN0hoiok8pA Weisman, Jonathan. “On Immigration, G. O. P. Starts to Embrace Tea Party,” The New York Times, August 2, 2014. Lopez, Mark Hugo, and Paul Taylor. “Latino Voters in the 2012 Election.” Pew Research Hispanic Trends Project, 2012. Vavreck, Lynn. “It’s Not Too Late for Republicans to Win Latino Votes,” The New York Times, August 11, 2014. 15 Immigration Reform: Executive Action Theory Readings Executive Branch and Public Policy Moe, Terry M., and William G. Howell. “Unilateral Action and Presidential Power: A Theory.” Presidential Studies Quarterly 29, no. 4 (1999): 850–73. Case Readings Lind, Dara. “How a Controversial Obama Program is Bringing Young Immigrants out of the Shadows.” Vox, 2014. Hager, Emily B., and Natalia V. Osipova. “One Family Faces the Immigration Debate.” August 18, 2014. The New York Times. Accessed June 24, 2015. http://www.nytimes.com/video/us/100000003031733/one-family-faces-the-immigration-debate.html Davis, Julie H. “Behind Closed Doors, Obama Crafts Executive Actions,” The New York Times, August 8, 2014. Washington Post Editorial Board. “Frustration Over Stalled Immigration Action doesn’t mean Obama can act Unilaterally,” The Washington Post, August 5, 2014. New York Times Editorial Board. “Mr. Obama, Your Move,” The New York Times, August 9, 2014. Gerstein, Josh. “Barack Obama’s Immigration Moves could be Unstoppable,” Politico, July 2014. 16 Gay Marriage: Making Social Policy at the Federal Level Case Readings Hoch, Maureen. “The Battle over Same-Sex Marriage: Marriage and the States,” PBS Online NewsHour, April 30, 2004. “A Massachusetts Court Starts a National Debate That Poses Problems for Both the Republicans and the Democrats.” The Economist, November 20, 2003. Stout, David. “Bush Backs Ban in Constitution on Gay Marriage,” The New York Times, February 24, 2004. Murray, Shailagh. “Gay Marriage Amendment Fails in Senate,” The Washington Post, June 8, 2006. Kaminer, Wendy. 
“Why Do We Care What Obama Thinks About Gay Marriage?” The Atlantic Monthly, May 2012. Sorensen, Adam. “Obama’s Persuasive Powers on Gay Marriage Manifest in Maryland,” Time, May 24, 2012. 17 Gay Marriage: Making Policy in the States Theory Readings Public Opinion and Federalism Peterson, Paul. “The Price of Federalism.” In The Enduring Debate: Classic and Contemporary Readings in American Politics. W. W. Norton & Company, 2013, pp. 73–81. ISBN: 9780393921588. Lax, Jeffrey, and Justin Phillips. “Gay Rights in the States: Public Opinion and Policy Responsiveness.” American Political Science Review 103, no. 3 (2009). (Skim) Case Readings Graff, E. J. “Marital Blitz.” The American Prospect 17, no. 3 (2006). “Dispatches from the Culture Wars: Bad News for Gays, Good News for Stoners,” The Economist, November 6, 2008. Gelman, Andrew, Jeffrey Lax, et al. “Over Time, a Gay Marriage Groundswell,” The New York Times, August 22, 2010. Ball, Molly. “A Coming Wave of Gay Marriage Electoral Victories?” The Atlantic Monthly, August 2011. Brumfield, Ben. “Voters Approve Same-sex Marriage for the First Time.” CNN.com, November 7, 2012. 18 Gay Marriage: Making Policy in Court Theory Readings Judicial Policymaking Tarr, Alan. Judicial Process and Judicial Policymaking. 3rd ed. Thompson Custom Publishing, 2003, pp. 281–88. ISBN: 9780534037130. Friedman, Leon. “Overruling the Court.” In The Enduring Debate: Classic and Contemporary Readings in American Politics. W. W. Norton & Company, 2013, pp. 263–67. ISBN: 9780393921588. Case Readings Talbot, Margaret. “A Risky Proposal,” The New Yorker, January 18, 2010. Duncan, William. “Same-Sex Marriage: The Tortuous Road to the Supreme Court,” Supreme Court of the United States Blog, August 17, 2011. Tribe, Lawrence. “The Constitutional Inevitability of Same-Sex Marriage.” Supreme Court of the United States Blog, August 26, 2011. (Read excerpt only.) Liptak, Adam. 
“Supreme Court Bolsters Gay Marriage with Two Major Rulings,” The New York Times, June 26, 2013. 19 Guest Speaker None IV: Putting It All Together: The Policymaking Process 20 Climate Change: Science, Public Opinion, and the Media Case Readings Boykoff, Maxwell T., and Jules M. Boykoff. “Balance as Bias: Global Warming and the U.S. Prestige Press.” Global Environmental Change 14, no. 2 (2004): 125–36. (Skim) Jacques, Peter J., Riley E. Dunlap, et al. “The Organisation of Denial: Conservative Think Tanks and Environmental Skepticism.” Environmental Politics 17, no. 3 (2008): 349–85. (Skim) Hansen, James, Makiko Sato, et al. “Perception of Climate Change.” Proceedings of the National Academy of Sciences of the United States of America 109, no. 37 (2012): E2415–23. (Skim) Kolbert, Elizabeth. “The Catastrophist,” The New Yorker, June 29, 2009. Banerjee, Neela. “Scientist Proves Conservatism and Belief in Climate Change aren’t Incompatible,” Los Angeles Times, January 11, 2005. Klein, Ezra. “How Politics Makes us Stupid.” Vox. April 2014. Yale Project on Climate Change Communication. “Politics & Global Warming, Spring 2014.” 2014. (Skim for key facts). 21 Climate Change: Federal Action in Congress and the Executive Branch Case Readings Lizza, Ryan. “As the World Burns,” The New Yorker, October 11, 2010. Davenport, Coral, and Peter Baker. “Taking Page From Health Care Act, Obama Climate Plan Relies on States,” The New York Times, June 2, 2014. Kolbert, Elizabeth. “Power Politics: Obama’s Overdue Climate-Change Speech,” The New Yorker, June 25, 2014. U.S. Environmental Protection Agency. “Clean Power Plan Explained.” June 2, 2014. YouTube. Accessed June 24, 2015. https://www.youtube.com/watch?v=AcNTGX_d8mY Liptak, Adam. “Justices Uphold Emission Limits on Big Industry,” The New York Times, June 23, 2014. Gillis, Justin, and Michael Wines. “In Some States, Emissions Cuts Defy Skeptics,” The New York Times, June 6, 2014. 
22 Climate Change: Policy Options Case Readings Stavins, Robert. “Market-Based Environmental Policies: What Can We Learn From the U.S. Experience (and Related Research)?” In Moving to Markets in Environmental Regulation: Lessons From Twenty Years of Experience. Edited by Jody Freeman and Charles D. Kolstad. Oxford University Press, 2006, pp. 19–47. ISBN: 9780195189650. [Preview with Google Books] Victor, David. “Climate Change: Debating America’s Policy Options.” Council on Foreign Relations, 2014, pp. 1–8. Khosla, Vinod. “A Simpler Path to Cutting Carbon Emissions,” The Washington Post, July 2, 2010. Struck, Doug. “Buying Carbon Offsets May Ease Eco-Guilt but Not Global Warming,” Christian Science Monitor, April 20, 2010. Kolbert, Elizabeth. “Hosed; Is There a Quick Fix for the Climate?” The New Yorker, November 16, 2009. EPA State and Local Climate and Energy Program. “Renewable Energy.” U.S. Environmental Protection Agency. Rabe, B. G. Chapter 1 in Statehouse and Greenhouse: The Emerging Politics of American Climate Change Policy. Brookings Institution Press, 2004. ISBN: 9780815773092. (Skim) 23 Climate Change: State Action and the Arizona Net-Metering Case Case Readings Rabe, B. G. “Race to the Top: The Expanding Role of U.S. State Renewable Portfolio Standards.” Sustainable Development Law & Policy 7, no. 3 (2007): 10–16. Barnes, Justin, and Chelsea Barnes. “2013 RPS Legislation: Gauging the Impacts.” (PDF) Solar Today 27, no. 7 (2013): 17–19. Cardwell, Diane. “A Pushback on Green Power,” The New York Times, May 28, 2014. Randazzo, Ryan, and Robert Anglen. “APS, Solar Companies Clash over Credits to Customers.” The Arizona Republic, October 21, 2013. The Arizona Republic. “Solar Future Up For Grabs in Arizona.” Accessed June 24, 2015. Explore the Alliance for Solar Choice website. Arizona’s Energy Future. “Net Metering.” Arizona Public Service. Prosper Org. “Ice Cream for Fairness!” October 21, 2013. YouTube. Accessed June 24, 2015. 
https://www.youtube.com/watch?v=zJ8tToIeQ_U Arizona Solar Facts. “Corporate Welfare.” July 2, 2013. YouTube. Accessed June 24, 2015. https://www.youtube.com/watch?v=gZOi-_sPF6s Arizona Solar Facts. “Who is TUSK?” September 11, 2013. YouTube. Accessed June 24, 2015. https://www.youtube.com/watch?v=RBL2fOZNLzg Davenport, Coral. “Pushing Climate Change as an Issue This Year, but With an Eye on 2016,” The New York Times, May 22, 2014. Schwartz, John. “Fissures in G. O. P. as Some Conservatives Embrace Renewable Energy,” The New York Times, January 25, 2014. V: Wrapping It Up 24 Summary Class None 25 Final Oral Project None 26 Final Oral Project (cont.) None
Course Description
This course introduces undergraduates to the basic theory, institutional architecture, and practice of international development. We take an applied, interdisciplinary approach to some of the “big questions” in our field. This course will unpack these questions by providing an overview of existing knowledge and best practices in the field. The goal of this class is to go beyond traditional dichotomies and narrow definitions of progress, well-being, and culture. Instead, we will invite students to develop a more nuanced understanding of international development by offering an innovative set of tools and content flexibility.
Weekly Memos and Responses
The class will be divided randomly into two groups: Group A and Group B. In each class, one group will write a memo, and the other will write a response to the memos written by their colleagues. The groups will alternate writing memos and responses. These assignments are to be based on the required readings for each class.
Mini Essays
Students will complete three mini-essays at the end of Units 1, 2, and 3. The deadline for submission is the first day of class of the subsequent unit. The goal of these essays is to give students an opportunity to reflect upon the issues discussed in class during each unit.
Final Project
Students, either individually or in groups of 2–3, will develop a proposal for a development intervention of their choice, as if it were to be presented to a funding or supporting organization for actual implementation. The idea is to expose students to the process through which development ideas are transformed into practice, preparing them for future work in the international development field.
Extra Credit
At the beginning of the semester, students will decide collectively on an alternative online platform (such as a Facebook group or online blog) to use for further class discussion. The engagement with the class through this alternative platform will be voluntary, but active participants will receive extra credit toward their final grade.
Course readings. SES # TOPICS READINGS 1 Introduction to International Development None Unit 1: Critically Conceptualizing, Contextualizing, and Historicizing International Development 2 Development and the Colonial Legacy Required Readings Acemoglu, Daron, et al. “The Colonial Origins of Comparative Development: An Empirical Investigation.” The American Economic Review 91, no. 5 (2001): 1369–401. Ranger, Terence. “The Invention of Tradition in Colonial Africa.” Chapter 6 in The Invention of Tradition. Edited by T. O. Ranger and E. J. Hobsbawm. Cambridge University Press, 1983, p. 211. ISBN: 9780521246453. [Preview with Google Books] Optional Readings Nunn, Nathan. “The Long-term Effects of Africa’s Slave Trades.” National Bureau of Economic Research Working Paper No. 13367, 2007. Graeber, David. Chapter 1 in Debt: The First 5,000 Years. Melville House, 2011. ISBN: 9781933633862. Fanon, Frantz. The Wretched of the Earth. Vol. 390. Grove Press, 1966. Mitchell, Timothy. Colonising Egypt: With a New Preface. University of California Press, 1991. ISBN: 9780520075689. [Preview with Google Books] Hobsbawm, Eric. Chapter 3 in The Age of Empire: 1875–1914. Pantheon Books, 1987. ISBN: 9780394563190. 3 The Ethical Underpinnings of Development Required Readings Chambers, Robert. Chapter 3 in Whose Reality Counts?: Putting the First Last. Intermediate Technology Publications, 1997. ISBN: 9781853393860. Giri, Ananta Kumar, and Philip Quarles van Ufford, eds. Chapter 1 in A Moral Critique of Development: In Search of Global Responsibilities. Routledge, 2003. ISBN: 9780415276252. Krisch, Joshua A. “When Racism Was a Science - ‘Haunted Files: The Eugenics Record Office’ Recreates a Dark Time in a Laboratory’s Past,” The New York Times, October 13, 2014. Optional Readings Falk, Richard, Balakrishnan Rajagopal, and Jacqueline Stevens, eds. International Law and the Third World: Reshaping Justice. Routledge Cavendish, 2008. ISBN: 9780415439787. [Preview with Google Books] Sen, Amartya K. 
“Rational Fools: A Critique of the Behavioral Foundations of Economic Theory.” Philosophy and Public Affairs 6, no. 4 (1977): 317–44. Sandel, Michael. What Money Can’t Buy: The Moral Limits of Markets. Farrar, Straus and Giroux, 2012. ISBN: 9780374203030. Ferguson, James. Chapter 2 in The Anti-politics Machine: “Development,” Depoliticization, and Bureaucratic Power in Lesotho. Cambridge University Press, 1990. ISBN: 9780521373821. 4 International Development as Concept and Narrative Required Readings Rostow, W. W. Chapters 1 and 2 in The Stages of Economic Growth: A Non-communist Manifesto. Cambridge University Press, 1991. ISBN: 9780521400701. [Preview with Google Books] Escobar, Arturo. “The Problematization of Poverty: The Tale of Three Worlds and Development.” In Encountering Development: The Making and Unmaking of the Third World. Princeton University Press, 2011, pp. 21–55. ISBN: 9780691150451. [Preview with Google Books] Optional Readings Fukuda-Parr, Sakiko. “Recapturing the Narrative of International Development.” In The Millennium Development Goals and Beyond: Global Development After 2015. Vol. 65. Edited by R. Wilkinson and D. Hulme. Routledge, 2012. ISBN: 9780415621632. [Preview with Google Books] Sen, Amartya. Development as Freedom. Oxford University Press, 1999. ISBN: 9780198297581. [Preview with Google Books] Esteva, Gustavo. “Development.” In The Development Dictionary: A Guide to Knowledge as Power. 2nd ed. Zed Books, 2010. ISBN: 9781848133808. Hulme, David. “Poverty and Development Thinking: Synthesis or Uneasy Compromise?” International Development: Ideas, Experience, and Prospects, 2013. (BWPI Working Paper 180) 5 Measuring Development Required Readings Banerjee, Abhijit, and Esther Duflo. Chapter 1 in Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty. Public Affairs, 2011. ISBN: 9781586487980. Jerven, Morten. Poor Numbers: How We are Misled by African Development Statistics and What to do About it. 
Cornell University Press, 2013, pp. 8–32. ISBN: 9780801451638. [Preview with Google Books] Optional Readings Klugman, Jeni, Francisco Rodríguez, et al. “The HDI 2010: New Controversies, Old Critiques.” The Journal of Economic Inequality 9, no. 2 (2011): 249–88. Ravallion, Martin. “The Human Development Index: A Response to Klugman, Rodriguez and Choi.” The Journal of Economic Inequality 9, no. 3 (2011): 475–78. Stone, Deborah A. “Causal Stories and the Formation of Policy Agendas.” Political Science Quarterly 104, no. 2 (1989): 281–300. Kenny, Charles, and David Williams. “What Do We Know about Economic Growth? Or, Why Don’t We Know Very Much?” World Development 29, no. 1 (2001): 1–22. Angrist, Joshua D., and Jörn-Steffen Pischke. “Introduction.” Chapter 1 in Mastering ’Metrics: The Path from Cause to Effect. Princeton University Press, 2014. ISBN: 9780691152844. [Preview with Google Books] Reddy, Sanjay G. “Randomize This! On Poor Economics.” Review of Agrarian Studies 2, no. 2 (2012). 6 Identities in Development: Inserting “Who We Are” in Relation to a Diverse Development Context Required Readings Schön, Donald A. Chapter 2 in The Reflective Practitioner: How Professionals Think in Action. Vol. 5126. Maurice Temple Smith Limited, 1983. ISBN: 9780851172316. Rodriguez, Richard. Chapter 2 in Hunger of Memory: The Education of Richard Rodriguez: An Autobiography. Bantam, 1983. ISBN: 9780553272932. Optional Readings Mehmet, Ozay. Chapter 1 in Westernizing The Third World: The Eurocentricity of Economic Development Theories. Routledge, 1999. ISBN: 9780415205733. Hall, Stuart, and Paul Du Gay, eds. Questions of Cultural Identity. Sage Publications, 1996. ISBN: 9780803978836. [Preview with Google Books] Miller, Byron. “Collective Action and Rational Choice: Place, Community, and the Limits to Individual Self-interest.” Economic Geography 68, no. 1 (1992): 22–42. Piore, Michael. 
Beyond Individualism: How Social Demands of the New Identity Groups Challenge American Political and Economic Life. Harvard University Press, 1995. ISBN: 9780674068971. Said, Edward. Orientalism. Vintage, 1979. ISBN: 9780394740676. Sen, Gita, and Caren Grown. Development Crises and Alternative Visions: Third World Women’s Perspectives. Monthly Review Press, 1987. ISBN: 9780853457176. Gutmann, Matthew C. The Meanings of Macho: Being a Man in Mexico City. Vol. 3. University of California Press, 1996. ISBN: 9780520202368. [Preview with Google Books] Unit 2: Development: From Theories to Strategies 7 Modernization and Growth Paradigms Required Readings Solow, Robert. “A Contribution to the Theory of Economic Growth.” The Quarterly Journal of Economics 70, no. 1 (1956): 65–94. Lewis, W. Arthur. “Economic Development with Unlimited Supplies of Labor.” In The Economics of Underdevelopment. Edited by A. N. Agarwala and Sampat Pal Singh. Oxford University Press, 1971. (Selected notes) Optional Readings Easterly, William. Chapters 2 and 3 in The Elusive Quest for Growth: Economists’ Adventures and Misadventures in the Tropics. MIT Press, 2001. ISBN: 9780262050654. Rosenstein-Rodan, Paul. “Problems of Industrialization of Eastern and Southeastern Europe.” In The Economics of Underdevelopment. Edited by Agarwala and Singh. Oxford University Press, 1963. (Selected Pages) Lipton, Michael. “Balanced and Unbalanced Growth in Underdeveloped Countries.” The Economic Journal (1959): 641–57. Hirschman, Albert O., and Charles E. Lindblom. “Economic Development, Research and Development, Policy Making: Some Converging Views.” Behavioral Science 7, no. 2 (1962): 211–22. 8 Easier Said than Done: Dependency and the First Challenges of the Development Agenda Required Readings Pritchett, Lant. “Divergence, Big Time.” Journal of Economic Perspectives 11, no. 3 (1997): 3–17. Cardoso, Fernando Henrique, and Enzo Faletto. 
Chapters 1 and 2 in Dependency and Development in Latin America. University of California Press, 1979. ISBN: 9780520035270. Optional Readings Chenery, Hollis B., et al. Redistribution with Growth: Policies to Improve Income Distribution in Developing Countries in the Context of Economic Growth. Oxford University Press, 1974. ISBN: 9780199200696. Meadows, Donella H., D. L. Meadows, et al. The Limits to Growth. Signet, 1972. ISBN: 9780451057679. Prebisch, Raul. Towards a Dynamic Development Policy for Latin America. United Nations, 1963. Furtado, Celso. “Underdevelopment: to Conform or Reform?” In Pioneers in Development, Second Series. Edited by Gerald M. Meier. Oxford University Press, 1987, pp. 203–7. ISBN: 9780195205428. “Employment, Incomes and Equality: A Strategy for Increasing Productive Employment in Kenya; Report of an Inter-agency Team Financed by the United Nations Development Programme and Organized by the International Labour Office.” International Labour Office 29, no. 1 (1974): 232–4. King, Loren A. “Economic Growth and Basic Human Needs.” International Studies Quarterly 42, no. 2 (1998): 385–400. Weigel, Van B. “The Basic Needs Approach: Overcoming the Poverty of ‘Homo Oeconomicus’.” World Development 14, no. 12 (1986): 1423–34. 9 Development Strategies by Late-industrializing Countries Required Readings Amsden, Alice. “Introduction.” In The Rise of the Rest: Challenges to the West from Late-industrializing Economies. Oxford University Press, 2001. ISBN: 9780195139693. [Preview with Google Books] Bruton, Henry J. “A Reconsideration of Import Substitution.” (PDF - 4.7MB) Journal of Economic Literature 36, no. 2 (1998): 903–36. Optional Readings Davis, Diane E. Discipline and Development: Middle Classes and Prosperity in East Asia and Latin America. Cambridge University Press, 2004. ISBN: 9780521002080. Gereffi, Gary. “Paths of Industrialization: An Overview.” In Manufacturing Miracles: Paths of Industrialization in Latin America and East Asia. 
Edited by G. Gereffi and D. Wyman. Princeton University Press, 1990. ISBN: 9780691077888. Mkandawire, Thandika. “Thinking about Developmental States in Africa.” Cambridge Journal of Economics 25, no. 3 (2001): 289–314. Wade, Robert. Governing the Market: Economic Theory and the Role of Government in East Asian Industrialization. Princeton University Press, 1990. ISBN: 9780691003979. Amsden, Alice H., and Takashi Hikino. “Borrowing Technology or Innovating: An Exploration of Two Paths to Industrial Development.” In Learning and Technological Change. Edited by R. Thomson. Palgrave Macmillan, 1993. ISBN: 9780312095918. Fagerberg, Jan, and M. Godinho. “Innovation and Catching-up.” Chapter 19 in The Oxford Handbook of Innovation. Edited by J. Fagerberg, D. Mowery, and R. Nelson. Oxford University Press, 2004, pp. 514–42. ISBN: 9780199286805. [Preview with Google Books] 10 The Debt Crisis, Globalization, and the Rise of the Washington Consensus Required Readings Williamson, John. “What Washington Means by Policy Reform.” Latin American Adjustment: How Much has Happened 7 (1990): 7–20. Ocampo, José Antonio. “The Latin American Debt Crisis in Historical Perspective.” Life After Debt: The Origins and Resolutions of Debt Crisis 87, 2014. Optional Readings Kindleberger, Charles. “Anatomy of a Typical Crisis.” In Manias, Panics, and Crashes: A History of Financial Crises. Wiley, 1986. ISBN: 9780471161929. Kahler, Miles. “Politics and International Debt: Explaining the Crisis.” International Organization 39, no. 3 (1985): 357–82. Broad, Robin. “The Washington Consensus Meets the Global Backlash: Shifting Debates and Policies.” (PDF) Globalizations 1, no. 2 (2004): 129–54. Stiglitz, Joseph E. Globalization and its Discontents. W. W. Norton & Company, 2003. ISBN: 9780393324396. Ocampo, José Antonio. “Latin America’s Growth and Equity Frustrations during Structural Reforms.” The Journal of Economic Perspectives 18, no. 2 (2004): 67–88. Haggard, Stephan. 
The Political Economy of the Asian Financial Crisis. Institute for International Economics, 2000. ISBN: 9780881322835. [Preview with Google Books] Harvey, David. A Brief History of Neoliberalism. Oxford University Press, 2005. ISBN: 9780199283262. [Preview with Google Books] 11 Different Views on Why and How Institutions Matter for Development Required Readings Acemoglu, Daron, and James A. Robinson. Chapter 15 in Why Nations Fail: The Origins of Power, Prosperity, and Poverty. Vol. 4. Crown Business, 2012. ISBN: 9780307719218. Weiss, Thomas G. “Governance, Good Governance and Global Governance: Conceptual and Actual Challenges.” Third World Quarterly 21, no. 5 (2000): 795–814. Optional Readings Rodrik, Dani. “Institutions for High-quality Growth: What They Are and How to Acquire Them.” Studies in Comparative International Development 35, no. 3 (2000): 3–31. North, Douglass C. Institutions, Institutional Change and Economic Performance. Cambridge University Press, 1990. ISBN: 9780521397346. [Preview with Google Books] Romer, Paul M. “The Origins of Endogenous Growth.” The Journal of Economic Perspectives 8, no. 1 (1994): 3–22. Nabli, Mustapha, and Jeffrey Nugent. “The New Institutional Economics and its Applicability to Development.” World Development 17, no. 9 (1989): 1333–47. Sabel, Charles. “Learning by Monitoring: The Institutions of Economic Development.” In Rethinking the Development Experience: Essays Provoked by the Work of Albert O. Hirschman. Edited by L. Rodwin and D. Schön. Brookings Institution Press, 1994, pp. 231–74. ISBN: 9780815775515. Woolcock, Michael, and Deepa Narayan. “Social Capital: Implications for Development Theory, Research, and Policy.” The World Bank Research Observer 15, no. 2 (2000): 225–49. Woolcock, Michael. “Social Capital and Economic Development: Toward a Theoretical Synthesis and Policy Framework.” Theory and Society 27, no. 2 (1998): 151–208. Pack, Howard. 
“Endogenous Growth Theory: Intellectual Appeal and Empirical Shortcomings.” Journal of Economic Perspectives 8, no. 1 (1994): 55–72. Acemoglu, Daron, Simon Johnson, and James Robinson. “Institutions as a Fundamental Cause of Long-Run Growth.” In Handbook of Economic Growth. Edited by Philippe Aghion and Steven Durlauf. North Holland, 2006, pp. 388–421. ISBN: 9780444520418. (Read First 4 Sections) Craig, David, and Doug Porter. “Poverty Reduction Strategy Papers: A New Convergence.” World Development 31, no. 1 (2003): 53–69. Elkins, Meg, and Simon Feeny. “Policies in Poverty Reduction Strategy Papers: Dominance or Diversity?” Canadian Journal of Development Studies / Revue canadienne d’études du développement 35, no. 2 (2014): 1–21. (Ahead-of-Print) 12 Continuous Development: Recent Challenges of Transition for High, Medium, and Low Income Countries Required Readings Lin, Justin, and Ha-Joon Chang. “Should Industrial Policy in Developing Countries Conform to Comparative Advantage or Defy It? A Debate Between Justin Lin and Ha-Joon Chang.” Development Policy Review 27, no. 5 (2009): 483–502. Zeng, Jin, and Yuanyuan Fang. “Between Poverty and Prosperity: China’s Dependent Development and the ‘Middle-income Trap’.” Third World Quarterly 35, no. 6 (2014): 1014–31. Optional Readings Rodrik, Dani. Chapter 4 in One Economics, Many Recipes: Globalization, Institutions, and Economic Growth. Princeton University Press, 2007. ISBN: 9780691129518. Juma, Calestous. “Complexity, Innovation, and Development: Schumpeter Revisited.” Journal of Policy and Complex Systems 1, no. 1 (2014): 4–21. Griffith, Breda. “Middle-income Trap.” Frontiers in Development Policy, 39 (2011): 39–43. Palma, José Gabriel. “Why Has Productivity Growth Stagnated in Most Latin-American Countries Since the Neo-liberal Reforms?” University of Cambridge, Faculty of Economics, 2011. Berger, Suzanne. Making in America: From Innovation to Market. MIT Press, 2013. ISBN: 9780262019910. 
[Preview with Google Books] Paus, Eva. “Confronting the Middle Income Trap: Insights from Small Latecomers.” Studies in Comparative International Development 47, no. 2 (2012): 115–38. World Bank. “Escaping the Middle-income Trap.” World Bank East Asia and Pacific Economic Update 2, 2010. Kharas, Homi, and Harinder Kohli. “What is the Middle Income Trap, Why Do Countries Fall Into it, and How Can it be Avoided?” Global Journal of Emerging Market Economies 3, no. 3 (2011): 281–89. Sumner, Andy, and Meera Tiwari. “Global Poverty Reduction to 2015 and Beyond: What has been the Impact of the MDGs and what are the Options for a Post 2015 Global Framework?” Institute of Development Studies Working Papers 348 (2010): 1–31. Unit 3: The Old International Aid Architecture and the New Development Context 13 International Development Across Scales: The Role of Organizations Linking a Complex Global System and the Implementation of Actual Interventions Optional Readings Fischer, Andrew M. “Putting Aid in its Place: Insights from Early Structuralists on Aid and Balance of Payments and Lessons for Contemporary Aid Debates.” Journal of International Development 21, no. 6 (2009): 856–67. Bagwell, Kyle, and Robert W. Staiger. “Can the Doha Round Be a Development Round?” In Globalization in An Age of Crisis: Multilateral Economic Cooperation in the Twenty-first Century. University of Chicago Press, 2011. ISBN: 9780226030753. [Preview with Google Books] Bellmann, Christophe, and Miguel Rodriguez Mendoza. “The Future and the WTO: Confronting the Challenges, A Collection of Short Essays.” (PDF) Edited by R. Meléndez-Ortiz. The International Centre for Trade and Sustainable Development (ICTSD), 2012. Stiglitz, Joseph E., and Andrew Charlton. The Right to Trade: Rethinking the Aid for Trade Agenda. Commonwealth Secretariat, 2013. ISBN: 9781849291057. [Preview with Google Books] Janský, Petr. 
“Illicit Financial Flows and The 2013 Commitment to Development Index.” Center for Global Development, 2013. Shaxson, Nicholas, and John Christensen. “The Finance Curse: How Oversized Financial Sectors Attack Democracy and Corrupt Economics.” (PDF - 1.9MB) 2013. Harvey, David. The Enigma of Capital: And The Crises of Capitalism. Oxford University Press, 2011. ISBN: 9780199836840. Piketty, Thomas. Capital in the Twenty-first Century. Belknap Press, 2014. ISBN: 9780674430006. 14 An Evolutionary Account of the Bretton Woods System Required Readings Power, Samantha. Chasing the Flame: One Man’s Fight to Save the World. Penguin Books, 2008. ISBN: 9780143114857. (Selected Notes) Optional Readings Mazower, Mark. Governing the World: The History of an Idea, 1815 to the Present. Penguin Press HC, 2013. ISBN: 9780143123941. (Selected Notes) Fukuda-Parr, Sakiko, and David Hulme. “International Norm Dynamics and the ‘End of Poverty’: Understanding the Millennium Development Goals.” Global Governance: A Review of Multilateralism and International Organizations 17, no. 1 (2011): 17–36. Woods, Ngaire. The Globalizers: The IMF, the World Bank, and their Borrowers. Cornell University Press, 2006. ISBN: 9780801444241. [Preview with Google Books] Sachs, Jeffrey. The End of Poverty: Economic Possibilities for Our Time. Penguin Press, 2005. ISBN: 9781594200458. Collier, Paul. The Bottom Billion: Why the Poorest Countries Are Failing and What Can Be Done about It. Oxford University Press, 2007. ISBN: 9780195311457. [Preview with Google Books] Lundsgaarde, Erik. The Domestic Politics of Foreign Aid. Vol. 1. Routledge, 2012. ISBN: 9780415656955. [Preview with Google Books] Easterly, William. The White Man’s Burden: Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good. Penguin Press HC, 2006. ISBN: 9781594200373. [Preview with Google Books] Moyo, Dambisa. Dead Aid: Why Aid is Not Working and How There is A Better Way for Africa. 
Farrar, Straus and Giroux, 2009. ISBN: 9780374139568. Sachs, Jeffrey D., and J. W. McArthur. “The Millennium Project: A Plan for Meeting the Millennium Development Goals.” The Lancet 365, no. 9456 (2005): 347–53. Heeks, Richard. “From the MDGs to the Post-2015 Agenda: Analyzing Changing Development Priorities.” University of Manchester, Institute for Development Policy and Management, SEED, Centre for Development Informatics, Working Paper No. 56 (2014): 50. 15 “Good Government in the Tropics” and South-South Cooperation Required Readings Brautigam, Deborah. Chapter 6 in The Dragon’s Gift: The Real Story of China in Africa. Oxford University Press, pp. 139–40, 2010. ISBN: 9780199550227. Pritchett, Lant. “Can Rich Countries be Reliable Partners for National Development?” Center for Global Development, 2015. Tendler, Judith. Chapter 6 in Good Government in the Tropics. Johns Hopkins University Press, 1997. ISBN: 9780801854521. Optional Readings Quadir, Fahimul. “Rising Donors and the New Narrative of ‘South–south’ Cooperation: What Prospects for Changing the Landscape of Development Assistance Programmes?” Third World Quarterly 34, no. 2 (2013): 321–38. Roll, Michael, ed. The Politics of Public Sector Performance: Pockets of Effectiveness in Developing Countries. Routledge, 2015. ISBN: 9781138956391. Levy, Brian. “Can Islands of Effectiveness Thrive in Difficult Governance Settings? The Political Economy of Local-level Collaborative Governance.” 2011. Crook, Richard C. “Rethinking Civil Service Reform in Africa:‘Islands of Effectiveness’ and Organisational Commitment.” Commonwealth and Comparative Politics 48, no. 4 (2010): 479–504. Kaplinsky, Raphael. “What Contribution Can China Make to Inclusive Growth in Sub Saharan Africa?” Development and Change 44, no. 6 (2013): 1295–316. Brautigam, Deborah. The Dragon’s Gift: The Real Story of China in Africa. Oxford University Press, 2010. ISBN: 9780199606290. 
16 The Rise of NGOs and Foundations

Required Readings

- Schuller, Mark. Chapter 5 in Killing with Kindness: Haiti, International Aid, and NGOs. Rutgers University Press, 2012. ISBN: 9780813553634. [Preview with Google Books]
- Bishop, Matthew, and Michael Green. Chapters 1 and 15 in Philanthrocapitalism: How Giving Can Save the World. A & C Black Publishers Limited, 2010. ISBN: 9781408121580. [Preview with Google Books]

Optional Readings

- Sanyal, Bishwapriya. “The Myth of Development From Below.” (PDF) Mimeo, Department of Urban Studies and Planning, Massachusetts Institute of Technology, 1996.
- Tendler, Judith. “What Ever Happened to Poverty Alleviation?” World Development 17, no. 7 (1989): 1033–44.
- Powell, Walter W., and Richard Steinberg, eds. The Nonprofit Sector: A Research Handbook. Yale University Press, 2006. ISBN: 9780300109030. [Preview with Google Books]
- Fowler, Alan, ed. Striking a Balance: A Guide to Enhancing the Effectiveness of Non-governmental Organisations in International Development. Routledge, 2009. ISBN: 9781853833250.
- Sanyal, Bishwapriya. “Cooperative Autonomy: The Dialectic of State-NGO Relationship in Developing Countries.” (PDF) International Institute of Labor Studies: Geneva, 1994.
- Fayolle, Alain, and H. Matlay, eds. Handbook of Research on Social Entrepreneurship. Edward Elgar Publishing, 2010. ISBN: 9781848440968.
- Yunus, Muhammad. Creating A World Without Poverty: Social Business and the Future of Capitalism. PublicAffairs, 2009. ISBN: 9781586486679.

17 The Newer Role of the Private Sector in Development: Collaborative Capitalism

Required Readings

- Polak, Paul, and Mal Warwick. The Business Solution to Poverty: Designing Products and Services for Three Billion New Customers. Berrett-Koehler Publishers, 2013, pp. 1–34. ISBN: 9781609940775. [Preview with Google Books]
- Sandel, Michael. “What Isn’t for Sale?” The Atlantic, February 2012.
- Schiller, Amy. “Is For-Profit the Future of Non-profit?” The Atlantic, May 2014.
Optional Readings

- Schwittay, Anke. “The Marketization of Poverty: With CA Comment by Krista Badiane and David Berdish.” Current Anthropology 52, no. S3 (2011).
- Locke, Richard M. The Promise and Limits of Private Power: Promoting Labor Standards in a Global Economy. Cambridge University Press, 2013. ISBN: 9781107670884.
- Clark, Cathy, Jed Emerson, et al. “Collaborative Capitalism and the Rise of Impact Investing.” John Wiley & Sons, 2014.
- Prahalad, C. K. The Fortune at the Bottom of the Pyramid, Revised and Updated 5th Anniversary Edition: Eradicating Poverty Through Profits. Pearson FT Press, 2009. ISBN: 9780133829136.
- Khanna, Tarun. Billions of Entrepreneurs: How China and India are Reshaping their Futures and Yours. Harvard Business Review Press, 2011. ISBN: 9781422157282.
- London, Ted, and Stuart L. Hart. Next Generation Business Strategies for the Base of the Pyramid: New Approaches for Building Mutual Value. FT Press, 2010. ISBN: 9780137047895. [Preview with Google Books]
- Radjou, Navi, Jaideep Prabhu, and Simone Ahuja. Jugaad Innovation: Think Frugal, Be Flexible, Generate Breakthrough Growth. Jossey-Bass, 2012. ISBN: 9781118249741. [Preview with Google Books]
- Ilahiane, Hsain, and John W. Sherry. “The Problematics of the ‘Bottom of the Pyramid’ Approach to International Development: The Case of Micro-entrepreneurs’ Use of Mobile Phones in Morocco.” Information Technologies and International Development 8, no. 1 (2012): 13.
- Arnold, Denis G., and Laura H. D. Williams. “The Paradox at the Base of the Pyramid: Environmental Sustainability and Market-based Poverty Alleviation.” International Journal of Technology Management 60, no. 1 (2012): 44–59.

Unit 4: Connecting Development Theory and Practice: First-hand Accounts of How Development is Practiced in Different Sectors

18 Development through the Private Sector

Required Readings

- Shavin, Naomi. “Big Pharma Is Making Progress in Finding an Ebola Vaccine, But They May Be Fighting The Wrong Battle.” The New Republic, January 2015.
- Surowiecki, James. “Ebolanomics.” The New Yorker, August 2014.
- Lam, Bourree. “Vaccines Are Profitable, So What?” The Atlantic, February 2015.
- Farmer, Paul, ed. “Three Stories, Three Paradigms, and a Critique of Social Entrepreneurship.” In To Repair the World: Paul Farmer Speaks to the Next Generation. Vol. 29. University of California Press, 2013. ISBN: 9780520275973. [Preview with Google Books]

19 Development through Government Initiatives

Required Readings

- da Silva, José Graziano, et al., eds. “The Fome Zero (Zero Hunger) Program: The Brazilian Experience.” Ministry of Agrarian Development, 2011. (Selected Notes)
- Levy, Santiago. Progress Against Poverty: Sustaining Mexico’s Progresa-Oportunidades Program. Brookings Institution Press, 2006. ISBN: 9780815752219. (Selected Notes)

Optional Readings

- Ansell, Aaron. Chapter 4 in Zero Hunger: Political Culture and Antipoverty Policy in Northeast Brazil. The University of North Carolina Press, 2014. ISBN: 9781469613970. [Preview with Google Books]
- “Mexico: Scaling Up Progresa / Oportunidades-CCT’s.” (PDF) Series: Scaling Up Local Innovations for Transformational Change. UNDP, November 2011.
- Levy, Santiago. Chapters 2 and 3 in Progress Against Poverty: Sustaining Mexico’s Progresa-Oportunidades Program. Brookings Institution, 2006. ISBN: 9780815752219.
- Hirschman, Albert O. Development Projects Observed. Brookings Institution Press, 1967. ISBN: 9780815736516.
- Batley, Richard. “The Politics of Service Delivery Reform.” Development and Change 35, no. 1 (2004): 31–56.
- Rasul, Imran, and Daniel Rogger. “Management of Bureaucrats and Public Service Delivery: Evidence from the Nigerian Civil Service.” (PDF - 1.2MB) Working Paper, University College London, 2013.
- Adler, Daniel, Caroline Sage, et al. “Interim Institutions and the Development Process: Opening Spaces for Reform in Cambodia and Indonesia.” Brooks World Poverty Institute Working Paper No. 86, 2009.
- Sumner, Andy. “Global Poverty and the New Bottom Billion: What if Three-quarters of the World’s Poor Live in Middle-income Countries?” Institute of Development Studies Working Papers 349 (2010): 1–43.
- Andrews, Matt. The Limits of Institutional Reform in Development: Changing Rules for Realistic Solutions. Cambridge University Press, 2013. ISBN: 9781107016330. [Preview with Google Books]

20 Development by Fostering Complementarities across Sectors

Required Readings

- Juma, Calestous. “Reinventing Africa’s Universities.” Al Jazeera, September 5, 2014.
- Lee, Keun, Calestous Juma, et al. “Innovation Capabilities for Sustainable Development in Africa.” WIDER Working Paper No. 2014 / 062, 2014.
- Juma, Calestous. “Complexity, Innovation, and Development: Schumpeter Revisited.” Journal of Policy and Complex Systems 1, no. 1 (2014): 4–21.

21 Development through Research

Required Readings

- Pearson, Ruth, and Cecile Jackson. “Interrogating Development: Feminism, Gender and Policy (1998).” In The Globalization and Development Reader: Perspectives on Development and Global Change. Wiley-Blackwell, 2014, p. 191. ISBN: 9781118735107. [Preview with Google Books]
- Sen, Amartya. “More Than 100 Million Women Are Missing.” The New York Review of Books, December 1990.

22 Group Presentations – Part 1 (no readings)

23 Development through Non-profit Organizations (no readings)

24 Group Presentations – Part 2 (no readings)

25 International Development: From the Classroom to the Real World (no readings)
Course Meeting Times
Seminar: 1 session / week, 2 hours / session
Prerequisites
There are no prerequisites for this course.
Course Description
This is a seminar course that explores the history of selected features of the physical environment of urban America. Among the features considered are parks, cemeteries, tenements, suburbs, zoos, skyscrapers, department stores, supermarkets, and amusement parks. The course gives students experience in working with primary documentation sources through its selection of readings and class discussions. Students then have the opportunity to apply this experience by researching their own historical questions and writing a term paper.
Syllabus Archive
The following syllabi come from a variety of different terms. They illustrate the evolution of this course over time, and are intended to provide alternate views into the instruction of this course.
Fall 2011, Robert Fogelson (PDF)
Fall 2010, Robert Fogelson (PDF)
Fall 2009, Robert Fogelson (PDF)
Fall 2008, Robert Fogelson (PDF)
Fall 2004, Robert Fogelson (PDF)
Project Assignment 3: Your Site Over Time
Framing Your Paper (PDF - 1.1MB)
This is the third part of a four-part, semester-long project. The first part consisted of finding a site; the second, of finding evidence of its environmental history and ongoing natural processes. Now the task is to trace changes on your site over time by comparing its character at several points in time, using maps. You may find different kinds of changes: Land use, density of settlement, additions to buildings, ownership, transportation. The types of sources you will find helpful are historical maps, especially nineteenth- and twentieth-century atlases, and may also include plans, prints, and photographs. The paper is due on class 17.
Start your investigation by locating your site on maps in several atlases of different dates. Include at least four different time periods in addition to the present, with at least one from the nineteenth century. By comparing your site at different times, you are likely to find that changes between some dates are more significant than others. Record the changes you think are important or interesting. You may want to modify your site slightly by shifting it a block or so to include interesting material that you have found or to make the site a bit larger or smaller. The site you end up with should contain four to eight blocks.
What changes do you find? How would you characterize them? Are the changes gradual or do they seem to happen suddenly? Do changes within a time period seem related? How about from one time to another? Can you find patterns in the changes? What might explain the changes you found? Were they merely an outcome of actions by individuals or do they reflect broader forces (social, cultural, political, economic, or natural processes and conditions at local, regional, national, or global scales; policies; events; technological changes)? Review Jackson’s Crabgrass Frontier for material to test, substantiate, or revise your hunches.
Describe what you have found, the causes you have identified, and your reasoning. The text should be equivalent to about 2400 words, accompanied by illustrations (don’t forget to cite the source of each illustration!). Focus on what seems most significant and interesting; look for patterns. Don’t try to cover everything. This is an assignment that could occupy you for an entire semester. The objective of the assignment is to give you a sense of how cities change over time, to prompt you to question why, and to search for answers.
Successful papers are well organized, cite specific examples to make each point, put examples in context, make reference to required texts, and are illustrated. In organizing your paper, focus on the patterns of change you found and the important issues they raise; consider using subheadings to highlight your key points. Choose your examples carefully. They should be specific and significant, illustrative of the patterns of change you found. Illustrations (copies of maps, prints, photographs) should be apt and clearly linked to your reasoning; quality is important, not quantity. Include a map identifying the boundaries of your site.
Start on this assignment right away and bring historical maps of your site to class workshops. The assignment requires finding your site on old maps before you can even begin to puzzle out the changes and their possible causes. Some maps are online, but you may want to augment those with other maps. Map collections often have their own hours and may not always be open when the rest of the library is. Leave yourself plenty of time.
It is important to include copies of the illustrations used to analyze the changes on your site. If you use the atlases on microfilm, copies are easily made. If you use bound atlases, which may not be reproduced on a copy machine, you may need to make drawn copies or photograph them.
Basic Requirements
- Compare your site at four different time periods (drawing on at least three detailed atlases) plus the present; at least one period must be from the nineteenth century. You may adjust the boundaries of your site, but keep the size to 4–8 blocks, 10 blocks at most. Delineate your site boundaries on all maps.
- Describe and analyze specific changes on your site during the periods of the maps examined. Your paper must include a copy of the historical maps you used to track the changes on your site.
- Refer to the required reading to test, substantiate, or revise your hypotheses about the changes you observed on your site since its initial settlement and how, why and when they occurred. Your essay should explain how the concepts presented in the reading help to explain (and / or perhaps confound or complicate) your observations of mapped data.
- Cite all sources, including maps, fully and properly. Abide by principles of fair use for images.
What Is This Assignment Asking You to Do (and Not to Do)?
This assignment is asking you to use historical maps as a primary source of evidence for determining how, when, and why your site has changed over time. It requires “close reading” of those maps, using your own eyes and mind, in order to identify, analyze, and explain patterns of change that are observable on the maps. It further asks you to use the required reading, Jackson’s Crabgrass Frontier, to help explain your findings.
Note: This assignment does not ask you to describe the history of your site using secondary sources (e.g. texts on the history of Boston or Cambridge). Do not conduct secondary research at the expense of close observation of the maps themselves.
Start by Finding the Maps!
Maps, atlases, and surveys have been produced throughout the histories of American cities. Produced for various purposes, they offer invaluable information about a place. Fire-insurance atlases, such as those produced by Bromley, Sanborn, and Hopkins, catalog the buildings and businesses that existed at a particular time and often show where the city may expand.
You will need to do considerable map research to write this paper. Maps, particularly nineteenth- and twentieth-century fire insurance atlases, will be the basis for your observations about how and why your site has changed over time. Look for maps dating back as early as possible to fully understand the site’s development over time. Refer to maps at a larger scale, such as those in Krieger, Mapping Boston, in order to put your site in context.
Assemble your maps as soon as possible. They are the primary source for all your observations, so you cannot truly begin this assignment without them. Use the Map Guide (PDF) to begin your map research and for references to further resources. Use your journal to make initial observations of the maps and to try out some of your ideas. Be prepared to puzzle about or be surprised by what you find.
Tips for Using Fire Insurance Maps to Discover How Your Site Changed Over Time
Focus on the detailed fire-insurance maps (Hopkins from the 1870s, Bromleys and Sanborns from the nineteenth and twentieth centuries). Let them be your visual guide to your site’s history.
Gather more fire-insurance maps than you might need (more than the required four plus the present) in order to get a comprehensive overview of how your site has changed over time, then focus on the ones that reveal the most about the character of changes on your site.
Most maps have a legend. Find the legend in order to identify the significance of colors, symbols, or abbreviations. Consult the Guide to Sanborn Abbreviations (PDF).
Start with the earliest detailed map that you can find (Hopkins or Bromley may be the earliest nineteenth-century maps). Identify patterns of streets, types of buildings and land uses, ownership, size of properties, and transportation.
- Is there a predominant land use on your site (e.g. residential, commercial, industrial, institutional) or are the uses evenly mixed?
- Color land uses on the map as a way to help you identify patterns and anomalies and to ask and answer questions of the map. Use standard colors for the various land uses:
- Single-family residential: Yellow
- Multi-family residential: Orange or brown
- Commercial: Red
- Institutional: Blue
- Industry: Purple
- Transportation and utilities: Grey
- Parks and recreation: Green
- Are the different land uses unrelated or related, and if so, how?
- If there are residences, are they occupied by a single owner or are they rental apartments? Do the names of owners suggest that they belong to a particular ethnic group?
- Is there a pattern of ownership (many different owners, one or more owners of multiple properties; corporate or institutional ownership)?
- Are the sizes of properties similar or quite different?
- Browse the timelines on the class website and start reading Kenneth Jackson’s book, Crabgrass Frontier. This will give you ideas as you make observations on the maps.
After observing the earliest detailed map of your site that you can find, proceed to the next map, chronologically, and repeat the process, asking the same questions. Have there been changes? If so, what are they?
Compare maps of successive periods to identify changes and to determine which periods of the site’s development were most significant (during which period the initial settlement took place or when important changes happened). In comparing maps of different periods, use multiple approaches to discover patterns of change over time.
- Trace changes in streets.
- Look for changes in property boundaries and size of parcels to help decode how land use and social use changes from map to map.
- Look for changing land use: Residential to commercial, industrial to commercial, or from one type of commercial use to another. Has the predominant land use on your site (residential, commercial, industrial, institutional) changed?
- Look for how / when space is filled or emptied, when buildings appear, disappear, or are replaced.
- Look at names as reflective of change: Look at labels, such as names of owners, of churches, cemeteries, and commercial or institutional buildings on your site.
- Look for continuity: Few or no changes can also be significant. Why might part of the site stay the same over time? Does land use stay the same but demographics shift (look for names of churches, schools, hospitals, prisons, etc.)?
Another approach to exploring change is to use reverse chronology. Are there elements of your current site you are curious about and want to track back in time? For example: Start with a specific land use or feature of your site and ask, “How did it get that way?” This may be an anomaly or interesting feature that you drew attention to in the first assignment.
Finding Explanations for the Changes You Observed
Use the succession of maps to help create a chronology of changes and patterns of change. Identify types of change to help pose hypotheses about change over time.
What might explain the patterns of changes that you found? Were the changes the result of idiosyncratic decisions by individual property owners? Were the changes peculiar to your site or were they examples of local or national trends? Do they reflect broader forces, such as technological innovation in power, transportation, or communication? Do they reflect local, regional, or national policy and / or economic conditions? Do they reflect cultural changes, such as changes in fashion and ways of living?
Consult Crabgrass Frontier for ideas about how to explain the changes you found at particular times and how and why they were significant.
Formulate questions that remain unanswered. There will be an opportunity in class to try to find answers.
Starting to Write
You will need four kinds of material to write your paper:
- The maps themselves. Annotate the maps to point to significant features. Use the maps as an aid in writing; you should also integrate the maps into your paper.
- Your insights and observations, captured in notes and maps that you have annotated. Prioritize and organize the changes you have observed. Which seem most significant? Which are explained or complicated by the concepts discussed in class and in Crabgrass Frontier and events documented on the timelines?
- Concepts drawn from the required reading, specifically the social, economic, political, and cultural history presented in Crabgrass Frontier and in class, which help to explain or raise further questions about your site and the maps that depict it at different points in time.
- Notes on how your observations and hypotheses relate to larger issues of urban design, planning, and policy, in Boston and elsewhere, as discussed in class and in the required reading.
- Review your reading notes, class notes, and journal entries: What elements of these three sources of information are applicable to your site and your questions of it? Look for concepts in the reading and lectures that help you decode or read your site. Explore these in your journal entries.
Structuring Your Paper
- Your paper should have a thesis. Your thesis will directly answer the central question of this assignment: How have social, political, and economic processes shaped your site? What broader issues about how cities are shaped are raised by your findings? Your thesis should aim to explore the implications and significance of your findings.
- Provide specific evidence, in the form of examples, to support your thesis.
- Explain, support, and develop your thesis by applying concepts from Crabgrass Frontier and from class. It’s important that the concepts and ideas you draw from the reading illuminate your site and are chosen with a purpose. Before you start to write, you will have decided what concepts help you read your site and why (perhaps in a journal entry).
- Organize your paper so that it explains your thesis and your significant findings in a logical and readable sequence of paragraphs. You could consider tracing from the present back in time or beginning with your earliest map and tracing the site’s features forward. Much depends on the particular qualities of your site.
- Consider using chronology to help organize your paper. To do this you’ll need to understand and analyze historical periods, which will help establish the context for important changes on your site.
- Consider organizing your paper around what has changed and what has stayed the same in your site, in terms of its built environment (such as changes to streets, property boundaries, buildings, and land use) and ownership (as depicted on maps), or which changes have had a catalytic effect (i.e. triggered much subsequent developmental change to your site).
- Consider using subheadings to help organize your draft, and potentially keep them for the paper and website presentation of your work. Subheadings help you, as you write, to be clear about what you are describing and arguing; in the final work they help the reader follow your line of interpretation.
Project Assignment 4: Artifacts, Layers, Traces, and Trends
Guide to Architectural Styles (PDF - 3.0MB)
This is the fourth assignment in a four-part, semester-long project. The task of the third assignment was to trace changes on your site over time using old maps, plans, prints, and photographs. Now the objective is to find traces of these changes present in the current environment and to interpret their significance. Many of you were attracted to your site because of some anomalous features that puzzled you and made you wonder why they were there and what had caused them to be. This is an opportunity to explore some answers to such puzzles.
Take a walk through your site looking for clues to the past and to what the future may hold. You will find it helpful to refer to the old maps you analyzed for the third assignment (and old prints and photographs if you have them). Walk through the site several times, once for each period for which you have a map, and compare the site today with what it was like at the time depicted on the map. This will be easier than trying to compare three or four maps from different time periods all at once. Look also for traces of past populations. Make notes on what you see. What different kinds of traces can you find and what period of the site’s history do they belong to? Do they relate to one another in any way? Describe the traces you think are most important or interesting. What do they reveal about the past and the present? Why did they survive? Are they still fulfilling some original purpose? Do they reveal anything about the present and / or future? What additional clues can you find in the present that hint at potential trends for the future?
Describe what you have found. The paper should be about 2400 words, accompanied by illustrations (don’t forget to include links to the maps from the third assignment). Focus on what seems most significant or interesting to you. Look for patterns. Don’t try to mention every trace of the past you find or every clue to a trend. This paper brings together the sum of your knowledge and observations about your site, and it will draw on your historical, topographical, and environmental knowledge of it. The objective of this assignment is to give you an appreciation for how past owners, functions, events, and ways of life have left traces on your site and, based on this understanding, to give you the opportunity to speculate on how the site may develop in the future.
Illustrate some of the artifacts, layers, traces, and trends that you found. These illustrations may include old maps, photographs, and prints, but should also include some drawings or photographs of what you saw and found significant. Do not feel intimidated if you doubt your artistic skills. The object is to record what you see and highlight what is significant about it. The illustrations will be graded on quality of content; your grade will not be reduced for lack of artistic skill. Illustrations are another way of recording and thinking about your observations. Organize the illustrations and present them neatly. Be selective: Quality is more important than quantity. Do not use dozens of photographs, hoping a few will hit the mark.
Successful papers are well organized, cite specific examples to make each point, put examples in context, and are illustrated. In organizing your paper, focus on the artifacts, layers, traces, and trends that you found, the important issues they raise, and patterns they illustrate; consider using subheadings to highlight your key points. Choose your examples carefully. They should illustrate the issues and patterns you identified as important in your site. Illustrations should be apt and clearly linked to your reasoning. Include a map identifying the boundaries of your site. Do not forget to cite the source of each illustration.
Basic Requirements
- Describe and analyze artifacts, traces, and layers (if present) from various time periods observed on your site and on historic maps. Describe and analyze signs that portend potential trends on your site.
- Include photographs, drawings, and / or maps to document what you discovered. Delineate your site boundaries on all maps.
- Refer to the required reading to test, substantiate, or revise your hypotheses about the significance of the artifacts, traces, layers, and trends you observed. Your essay should explain how the concepts presented in the reading help to explain your observations.
- Present a thesis about the significance of the traces and trends you observed and what they reveal about your site.
- Cite all sources, including maps, fully and properly.
- The paper should be about 2400 words (approximately eight pages, typed double-spaced), accompanied by illustrations. This paper is due by class 21.
Orienting Yourself to the Assignment
Many of you were attracted to your site because of some anomalous features that puzzled you and made you wonder why they were there and what had caused them to be. This is an opportunity to explore some answers to such puzzles. Keep in mind that this assignment requires two sorts of reflection, the first, from what you know so far, and the second, from observing the site for visual traces of the layers of knowledge you’ve acquired about your site.
Plan for Site Visits with Maps
You’ll want to make more than one site visit: First to track the current site against each period for which you have a map; second, to revisit the site a day or two later to observe further. Start by taking a succession of walks through the site with each map of a particular time (in chronological order). Then identify places where there are interesting juxtapositions of multiple time periods and make a series of stops to compare those locations during successive periods, using the method that we employed on the field trip. In other words, plan to walk the site several times, once for each map (in order to discern layers or strata related to a particular period) and once to compare all time periods.
Keep in mind that you’re likely to encounter artifacts of individual moments in time, but also layers of artifacts or traces that might very well serve as the embodiment of or as an emblem of the changes to the site over time.
Record field notes for each site walk / map walk (and consider using your notes as the basis for your journal).
Questions to Ask
- What different kinds of artifacts and traces can you find and what period of the site’s history do they belong to?
- Do they relate to one another in any way?
- Which artifacts and traces do you think are most important or interesting, and why? Describe them in your field notes and document with photographs.
- What do you think these artifacts and traces reveal about the past and the present?
- Why did they survive?
- Are they still fulfilling some original purpose (or have they been entirely adapted to new uses)?
- Do they reveal anything about the present and / or future?
- What additional clues can you find in the present that hint at potential trends for the future?
Documenting Your Discoveries
- You must gather visual evidence from the site in the form of your own photographs or drawings of the visual artifacts and traces of the past that you observe, noting how they relate to the history of the site and / or its possible future.
- Consider your visual evidence with a curatorial eye: be selective and make conscious choices about your illustrations (quality is more important than quantity). Consider how the visual is a vehicle for highlighting what you think are significant artifacts, traces, layers, and trends on your site. As with other papers, curate your visual evidence so that it focuses on what most clearly reveals the site's clues to the past.
- Consider including drawings as well as photographs and maps. Do not feel intimidated if you doubt your artistic skills. The object is to record what you see and highlight what is significant about it. The illustrations will be graded on quality of content; your grade will not be reduced for lack of artistic skill. Illustrations are another way of recording and thinking about your observations.
Organizing and Drafting the Paper
- After you’ve done your site visits, you will need to organize your observations and evidence to focus on what you think is most significant about your site. Remember that this might be in the form of continuity or change, and that traces might be small (a cornerstone) or big (streets and buildings that have remained in place).
- You’ll want to choose specific artifacts, traces, or layers that you see on the site. Keep in mind as well the dynamic of anomaly versus pattern of common features that has been a theme of the course since the start. Finally, keep an eye out for singular artifacts versus layers of artifacts from a similar period and consider the stories these layers might tell.
- Once you’ve curated and selected the focus of your visual evidence, you’re ready to write your draft. A few things to keep in mind:
- Organize the paper around your visual evidence and close analysis of it. Remember to analyze your visual evidence in the body of the paper, and make the captions do real intellectual work as well, by commenting on what they illustrate and citing their sources fully.
- Start with an introduction that speaks to your topographical, environmental, and historical knowledge of the site so far and puts this last assignment in conversation with the previous ones.
- Consider using subheadings to organize the body of the paper, both as you write it, and for your reading audience.
- Be sure to cite properly—this means citing any sources you paraphrase, including class lectures, and providing full citations in your captions.
Refining Your Website
- The clarity and graphic quality of presentation on your website have been given more weight as the semester has progressed. Post an image on the homepage of your website if you have not already done so.
What is your town’s Mitigation Plan?
Decide on a town to research. We prefer that you use your hometown, if possible. As someone from the town, you will better understand town dynamics, town threats, town government, and maybe even town politics.
Find a copy of your town’s mitigation plan, if there is one, and analyze the plan.
Refer to APA FEMA Hazard Mitigation: Integrating Best Practices into Planning, Chapter 2, page 19, which discusses the problems with town mitigation plans. This section offers specific criticisms of these types of plans.
Refer also to Drabek’s “Managing the Emergency Response” where he reviews town responses to a variety of disasters.
- Provide a short background analysis of your town’s location, population level, key industries, etc. (You might make use of census data and maps for this section of your report).
- Describe the mitigation plan:
- What possible threats has the town/city identified?
- What natural hazards and man-made hazards is the town preparing for in the mitigation plan?
- Are there warning systems included in the plans?
- Is there an emergency operations center?
- Are there community disaster exercises?
- What communication plans has the town created?
- Who is in charge when an emergency happens?
- Who does the pre-planning before a disaster happens?
- Who does the post-disaster planning after a disaster?
- Analyze the Plan:
After you have described the key points of the plan, put on your analysis hat. Does this plan seem to be a viable plan to follow during an emergency? Explain. Does this plan create a process for handling an emergency? Or is it a product that sits on a shelf?
Think about the cycle of disaster that we have discussed in class: Mitigation➔Preparedness➔Response➔Recovery. Can you identify steps of emergency planning in your town’s mitigation plan? Does the mitigation plan recognize and touch on each aspect of the cycle of disasters? Explain.
Your memo should be no more than 5 pages, single-spaced, excluding tables, charts, and graphics.
A draft version of your memo is due Session 6. It’s OK if your draft exceeds the page count – more to work with!
The final version of the memo in proper formatting should be uploaded to the class website by Session 8.
Global Cityscope is divided into four modules that correspond to the four traditional phases of disaster management (sometimes called emergency management).
Module 1 (Weeks 2–4): Disaster Mitigation
Topics
U.S. Disaster Policies: History and Institutions
Mitigation Planning and Policy Strategies: Local, State, and Federal (First draft of Disaster Mitigation Memo due session #6)
Measuring and Mapping Vulnerability (Final version of Disaster Mitigation Memo due session #8)
Module 2 (Weeks 5–8): Preparedness and Planning
Topics
Social and Economic Vulnerabilities
Community Resilience
Emergency Management Planning
Communication and Risk Management (Policies and Plans)
Week 7: Workshop in Valparaíso, Chile
Module 3 (Weeks 9–11): Disaster Response
Topics
Emergency Planning
Supporting Emergency Response Operations
Coordination and Collaboration in Emergency Response Planning and Management
Module 4 (Weeks 12–13): Disaster Recovery and Rebuilding
Topics
Recovery Time Frames and Differential Recovery Rates
Long-Term Recovery
Post-Disaster Recovery Planning and Reconstruction
Post-Disaster Housing Planning and Land Readjustment
Straw Towers
- Building Straw Towers presentation
- Straw Towers and Learning Environments summary (PDF)
- Towers will be judged on: originality, hurricane resistance (using a fan), and height (to the top)
- Straw Towers Results
The results of the Straw Towers activity are written on the board. (Image courtesy of Eric Klopfer and Wendy Huang.)
Pulleys (PDF)
Technology in Education Poster (PDF)
Town Hall Debate (PDF)
Technology Centered Environments
Based on your in-class experiences with educational technologies, what you have learned from your research, and what you have read about educational technologies, you will individually write a five-page paper on whether (or not) creating technology-centered environments should be considered a broad educational goal. Your paper will be graded on the following criteria:
- Ability to integrate your experiences with educational theory and research.
- Evidence to support your perspective on technology-centered environments.
- Coherence and consistency of writing.
Math Games
Final Portfolio (PDF)
In this section, Prof. Klopfer outlines his constructivist approach to teaching and explains the details of how students learn from experience in a video interview.
Enabling Students to Learn Through Experience
I use a constructivist approach in my classes; I believe that learning happens not from me telling things to students, but from them experiencing things. By incorporating large amounts of discussion- and activity-based time into this class, I aimed to provide opportunities for my students to learn through experience.
Two prime examples of the constructivist approach in action are the current events assignment (PDF) and the chapter readings assignment (PDF), which are described in more detail here. Another example is the modes of teaching activity, in which students learned about and experienced various educational strategies by teaching their own lessons to their classmates.
In this video, Professor Eric Klopfer expands on his use of the constructivist approach in teaching the MIT course 11.124 Introduction to Education: Looking Forward and Looking Back on Education.
In this section, Prof. Klopfer describes two recurring course assignments that integrated online and in-class activities in complementary ways. In the last section, two students from the class share their thoughts on these activities.
For this class, we created two recurring assignments that integrated online and in-class activities in complementary ways. For both of these activities, students took turns being discussion leaders in groups of 2-3. The discussion leaders enabled and facilitated discussions online. These online discussions made it possible to start valuable conversations before class met instead of being constrained to limited class time. In the online forum, all students were able to share their opinions, and the online discussions and questions helped student discussion leaders structure the subsequent classroom activities. Below, I explain these two activities, current events and chapter readings, and the complementary roles of the online and in-class parts.
The Current Events Assignment
For the current events assignment (PDF), one group of 2-3 students presented a current events article each week. At the beginning of each week, the presenting group selected an article, posted it online, and posed a few related questions to the rest of the class through a Moodle forum. All students then discussed the questions online. Subsequently, the student presenters built upon the online discussion by moderating a discussion or running an activity in class. The in-class component took roughly 30 minutes per week, or about one-ninth of class time.
In this video, Professor Eric Klopfer discusses the current events assignment from his MIT class, 11.124 Introduction to Education. He focuses on why he introduced the assignment and how the online and in-class components complemented each other.
The Chapter Readings Assignment
Similar to the current events assignment, the chapter readings assignment (PDF) required groups of 2-3 students to each read one chapter of a selected book and post a summary to a Moodle wiki. Other students were then expected to read the chapter summary and engage in a conversation around it. In the subsequent class, the student presenters facilitated a discussion that built upon the online conversation. The groups of 2-3 students cycled through so that every student had the opportunity to summarize a chapter and jointly lead an in-class discussion. Over the course of the semester, an online summary of the whole book was built chapter-by-chapter on the Wiki. We did not require that students read the whole book; however, the students did need to read the summary for a sense of the book’s important issues so they could engage in conversations about the book’s themes. As with the current events assignment, the in-class component took roughly 30 minutes per week, or about one-ninth of class time.
Each year when I choose the books for the chapter readings, I like to have one that brings in a historical context and one that brings in a current context. It’s a matter of thinking about things that are timely. I like to vary the books every few years, so I may switch to a new book after I’ve heard different students’ perspectives on a book’s themes for a couple years.
Students’ Thoughts on the Online Activities
In this video, two students share their thoughts on the online component of the course’s current events and chapter readings assignments.
Course Description
This course is designed to prepare you for a successful student teaching experience. Some of the major themes and activities are: analysis of yourself as a teacher and as a learner, subject knowledge, adolescent development, student learning styles, lesson planning, assessment strategies, classroom management techniques, and differentiated instruction. The course requires significant personal involvement and time. You will observe high school classes, begin to pursue a more active role in the classroom in the latter part of the semester, do reflective writings on what you see and think (journal), design and teach a mini-lesson, design a major curriculum unit, and engage in our classroom discussions and activities.
[Skillful] = Saphier, Jon, Mary Ann Haley-Speca, and Robert Gower. The Skillful Teacher: Building Your Teaching Skills. Research for Better Teaching, 2008. ISBN: 9781886822108.
[Tools] = Jones, Fredric H., Patrick Jones, et al. Fred Jones Tools for Teaching: Discipline, Instruction, Motivation. Fredric H. Jones & Associates, 2007. ISBN: 9780965026321. [Preview with Google Books]
[Champion] = Lemov, Doug. Teach Like a Champion: 49 Techniques that Put Students on the Path to College. Jossey-Bass, 2010. ISBN: 9780470550472. [Preview with Google Books]
SES 1: Introduction / Setting Academic Expectations. Readings: "Expectations," Chapter 12 in [Skillful]; "Expectations," Chapter 1 in [Champion].
SES 2: Creating a Strong Classroom Climate. Readings: "Personal Relationship Building," Chapter 13 in [Skillful]; "Classroom Climate," Chapter 14 in [Skillful]; "Strong Classroom Culture," Chapter 5 in [Champion].
SES 3: Creating Independent Learners. Readings: Chapters 2 and 3 in [Champion]; Sections 1 and 3 (Chapters 1–4) in [Tools].
SES 4: Special Education Seminar I: Introduction. No readings.
SES 5: Creating Classroom Structure. Readings: Chapters 2 and 3 in [Champion]; Section 3 (Chapters 5–8) in [Tools].
SES 6: Creating Classroom Routines. Readings: "Discipline," Chapter 8 in [Skillful]; Section 4 (Chapters 9–10) in [Tools].
SES 7: Setting Classroom Behavioral Expectations. Readings: Chapters 7 and 8 in [Champion]; Section 5 (Chapters 11–12) in [Tools]; Esquith, Rafe. Teach Like Your Hair's on Fire: The Methods and Madness Inside Room 56. Viking Adult, 2007. ISBN: 9780670038152.
SES 8: Maintaining Classroom Discipline. Readings: "Clarity," Chapter 9 in [Skillful]; Section 6 (Chapters 13–16) in [Tools].
SES 9: Curriculum Design: Part I. Readings: "Principles of Learning," Chapter 10 in [Skillful]; Section 6 (Chapters 17–19) in [Tools].
SES 10: Curriculum Design Practice: Part II. Readings: "Models of Teaching," Chapter 11 in [Skillful]; Section 7 (Chapters 20–23) in [Tools]; Chapters 4 and 9 in [Champion].
SES 11: Special Education Seminar II: Perspective of a Special Education Teacher. Readings: "Learning Experiences," Chapter 18 in [Skillful]; Section 8 (Chapters 24–25) in [Tools].
SES 12: Assessment: Introduction. No readings.
SES 13: Assessment: Design. Readings: "Assessment," Chapter 19 in [Skillful].
SES 14: Standardized Testing: MCAS Design and Student Preparation. Readings: "Curriculum Design," Chapter 15 in [Skillful].
SES 15: Classroom Management: Ses #1. Readings: "Overarching Objectives," Chapter 20 in [Skillful]. Other readings for the classroom management sessions were taken from: Wiggins, Grant, and Jay McTighe. Understanding by Design. Prentice Hall, 2005. ISBN: 9780131950849; and Greene, Ross W. Lost at School: Why Our Kids with Behavioral Challenges are Falling Through the Cracks and How We Can Help Them. Scribner, 2009. ISBN: 9781416572275.
SES 16: Classroom Management: Ses #2
SES 17: Classroom Management: Ses #3
SES 18: Classroom Management: Ses #4
SES 19: Classroom Management: Ses #5
SES 20: Classroom Management: Ses #6
SES 21: Common Core Standards and Implications for MCAS Testing. Readings: "Massachusetts Curriculum Frameworks," Massachusetts Department of Elementary and Secondary Education website, February 22, 2011; "Common Core State Standards Initiative: Massachusetts Side-by-Side Comparison Documents" (DOC), Massachusetts Department of Elementary and Secondary Education website, May 4, 2012; Rothman, Robert. "Five Myths about the Common Core State Standards." Harvard Education Letter 27, no. 5 (2011).
SES 22: Special Education Seminar III: Teaching Practice in the Science Classroom. No readings.
SES 23: Best Practice: Differentiated Instruction Introduction. No readings.
SES 24: Best Practice: Differentiated Instruction Applied to Independent Activities Period (IAP) Class. No readings.
SES 25: Preparation for IAP Teaching. No readings.
SES 26: Preparation for IAP Teaching (cont.). No readings.
Weekly Film Notes
Students are required to prepare and submit notes on each film following each film screening. These notes are designed to ensure that students are watching films attentively with an active mind and to generate ideas for papers and class discussions.
Paper 1: Observing City Scenes
For the first paper, students should select, observe, and describe a single “scene” from the life of the city around them.
Paper 2: Close Reading
For the second paper, students should build on class discussions and conduct a close reading of just two scenes or sequences to explore what they say about the experience of living in cities. Although students have free choice of any two scenes from the first six films in the course, they should ground their thinking and analysis in the arguments presented in Louis Wirth's article "Urbanism as a Way of Life" (1938).
Paper 3: Films and Themes
For the third paper, students should pick one theme and trace it through three different films and then add some new observations to extend beyond the films.
Final Paper
For the final paper, students should either select one film and explore 2–3 themes throughout it, or select one theme and discuss it in the context of 2–3 films. Importantly, although students may certainly draw on their knowledge of the films from the syllabus, they are expected to do "outside research," identifying films about cities that have not yet been screened or discussed in class.
Assignment
For the 13 weeks of the class, we've seen films that I've selected, which often highlighted themes that I felt were particularly relevant to exploring changing ideas about cities. We've also spent much of our time looking at some pretty old films, reflecting my view that we, as scholars (and practitioners!) of urban studies and planning, need to build up our foundations from the historical record; the syllabus brings us up to the cusp of the 21st century, but doesn't include anything from the last 15 years (when most of you have probably formed your own ideas about cities and the issues that define and confront them).
This focus was intentional, but also limiting: “What’s past is prologue,” as they say, but it is not the end of the story, and I expect that the next 100 years will have a lot of new things to say about both cities and films. Similarly, the films in the course have all been set in an American or European context; I expect that city films from other places might raise different issues—an important point for us to consider, given the rate of urbanization in Africa, Asia, and Latin America.
Luckily, the final paper will give us a chance to address these shortcomings; now it’s your turn to pick the films and decide what themes you want to explore. The assignment—and it is intentionally very open-ended—is to select either one film and explore 2–3 themes through it, or alternatively, to select one theme and discuss it in the context of 2–3 films. Importantly, although you may certainly draw on your knowledge of the films from the syllabus, you are expected to do “outside research,” identifying films about cities that we have not yet screened or discussed. (Note: Although I’m offering the chance to include more recent films, you can also decide to use old / classic films if you like—and if you are exploring a change in attitudes over time, you will definitely want to include some older films.)
Some Examples
Hopefully, it’s obvious what I mean when I say, “select…one film and explore 2–3 themes through it”—this is what we’ve been doing all semester. Do remember, however, that you don’t need to limit your discussion to just the one film: To strengthen your analysis, you may want to connect the issues and ideas you observe in one film with references to films from other times or places; all I ask is that if you select this first option, the emphasis of your paper be on unbundling as much as you can from one film that we haven’t yet discussed. As for the second option—“select one theme and discuss it in the context of 2–3 films”—some examples might help:
- Urban decay (or even “ruin porn”) in films such as 8 Mile (2002) and the new Brick Mansions (2014; itself a remake of the 2004 French film, Banlieue 13);
- Car culture and the city, in films such as American Graffiti (1973), To Live and Die in L.A. (1985), and Drive (2011);
- Urban existence “After the Apocalypse,” in films such as 28 Days Later (2002) or I Am Legend (2007);
- The city in children’s films, such as The Muppets Take Manhattan (1984), Who Framed Roger Rabbit (1988), the “Rhapsody in Blue” segment from Fantasia 2000 (2000), and Hugo (2011);
- Life in informal settlements, viewed from films such as City of God (2002) or Slumdog Millionaire (2008);
- A discussion of multiple films set in the same city, perhaps made in different eras (besides the obvious candidates of New York and Los Angeles, good options might be Chicago, New Orleans, San Francisco, Washington, or Boston; but you don’t need to limit yourself to American cities…);
- A comparison of three films with the same story (a newcomer finds his / her way in the city; two people from “opposite sides of the tracks” fall in love; a poor person struggles to overcome unemployment; someone gets lost / trapped in the city / a neighborhood and can’t get out, etc.) from three different parts of the world, or from three different historical periods.
Don’t forget: Whenever possible, please try to connect your ideas with the films and the readings we discussed in class. Every new thing you discover or learn becomes all the more meaningful to the extent that you fit it into the fabric of what you already know, through comparison, contrast, refinement, and other techniques of synthetic knowledge generation.
Details
Length
The total length for the final paper should be 10–12 pages, although it can certainly be broken down into 2–3 shorter sections if that works better for you. Importantly, rather than focusing on the page count, focus on what you want to say, and use the pages you need to say it (and no more).
Other Things to Include
- Be sure you give your paper a title.
- Number your pages and include your name on each one.
- You don’t need to include images, but you can if you want; both words and pictures can be useful when observing and describing films (and cities).
- As you write your ideas, you may want to review the Corrigan book, A Short Guide to Writing about Film.
Deadline & Submission
This paper is due at the final class session.
Student Examples
The examples below appear courtesy of MIT students and are used with permission. Examples are published anonymously unless otherwise requested.
Not One But Thousands: Individuals Experience the City (PDF)
Assignment
Although this is a class about films, our first writing assignment is actually just about observation. Film is a wonderfully rich medium because it includes sound and vision in motion through space and time; but if you stop and think for a moment, you realize that actual real life features all of these elements as well. So before we analyze The City in Film, we’ll turn our critical eye to The City Around Us.
For this assignment, you are asked to select, observe, and describe a single “scene” from the life of the city around you. Some examples might include:
- A crowd of people lining up at rush hour to get onto the Red Line at Central Square;
- a couple walking along Memorial Drive in the snow;
- a slow pan down a Back Bay alley, exploring garbage, graffiti, back doors, and utility cabinets;
- the view of the skyline from a roof of MIT, where a group of young astronomers have gathered to observe the Leonids;
- a homeless person watching as people pass by; or
- the contrast between the window displays of two neighboring stores: One advertising vodka, the other back-to-school supplies.
These are, of course, just samples—please don’t write about these ones. Walk around a bit, think about a place or a time that really evokes something about living in the city to you, and write it up. Importantly, just describe it, as if this were an excerpt from a screenplay; don’t interpret it for us. (One of the key rules of good art is “Show, don’t tell.”)
Before writing up your “scene,” I recommend you re-read the “Looking at Cities” article by Allan Jacobs, as well as the sections from Corrigan listed in the syllabus (perhaps even skimming some of chapters 3 and 4). The former will help you think about how we observe places, the latter about how we observe scenes.
Details
Length
This is a short paper—please aim for a target of 2–3 pages (approximately 500–750 words). The goal is to present and analyze a few keen, focused observations, not a comprehensive analysis of everything about a city or a neighborhood. Decide what you want to say in advance, strive for tight writing, and revise as necessary to make every word count; remember the three keys to strong writing: trim, TRIM, TRIM.
Other Things to Include
- Be sure you give your paper a title.
- Number your pages and include your name on each one.
- You don’t need to include photos or diagrams, but you can if you want; both words and pictures can be useful when observing and describing cities (and films).
Deadline & Submission
This paper is due at the beginning of Week 4.
Student Examples
The examples below appear courtesy of MIT students and are used with permission. Examples are published anonymously unless otherwise requested.
Assignment
In class, in the readings, and through films, we’ve explored different aspects of urban life—the ways that people in cities interact with each other and with public space; the challenges, opportunities, and contradictions confronted by city-dwellers; and even the possibility that the physical, social, and economic systems and structures of the city might be acting (either subtly or bluntly; either accidentally or by design) to shape, mold, model, constrain, enable, stratify, homogenize, or otherwise “urbanize” residents.
Looking closely at the scenes and themes in the films so far, we’ve found evidence of these aspects of the city in Berlin, New York, Rome, Los Angeles, and Neubabelsberg. For this assignment, you are asked to build on our discussions and conduct a close-reading of just two scenes or sequences to explore what they say about the experience of living in cities. Although you have free choice to explore any two scenes you choose from the first six films we’ve seen, I am asking you to ground your thinking and analysis in the arguments presented in Louis Wirth’s article on “Urbanism as a Way of Life” (1938).
In this seminal work of urban sociology, Wirth attempts “to set forth a limited number of identifying characteristics of the city” (p. 8). The abstract provides a nice inventory of most of these features, which in the end represents a pretty impressive “limited number”:
Large numbers account for individual variability, the relative absence of intimate personal acquaintanceship, the segmentalization of human relations which are largely anonymous, superficial, and transitory, and associated characteristics. Density involves diversification and specialization, the coincidence of close physical contact and distant social relations, glaring contrasts, a complex pattern of segregation, the predominance of formal social control, and accentuated friction, among other phenomena. Heterogeneity tends to break down rigid social structures and to produce increased mobility, instability, and insecurity, and the affiliation of the individuals with a variety of intersecting and tangential social groups with a high rate of membership turnover. The pecuniary nexus tends to displace personal relations, and institutions tend to cater to mass rather than to individual requirements. The individual thus becomes effective only as he acts through organized groups.
As you begin to organize your thoughts for this paper, re-read the Wirth article, as well as the readings from Corrigan, which are there to help you analyze and write about films. Reflect on whether you recall evidence of any of Wirth’s “characteristics of urban life” in the movies we’ve seen, and select two scenes or sequences to compare and contrast. Note that these film segments may support Wirth’s arguments, but they could also contradict his views: He was a pretty smart cookie, but does not need to be the final word on urbanization. Even more exciting, your argument may somehow complicate or complexify the question beyond what Wirth covers in his short article. (Check it out—we can break free from the constraints of binary thinking…)
You may (read: will) find it helpful (read: necessary) to re-watch your chosen scenes a few times, stopping the film as necessary to make notes and capture all the relevant details.
Details
Length
This is a short paper—please aim for a target of 2–3 pages (approximately 500–750 words). The goal is to present, analyze, and support a few keen, focused observations, not a comprehensive analysis of everything about the scenes or the urban experience. Decide what you want to say in advance, strive for tight writing, muster your evidence and weave it in to support your argument, and revise as necessary to make every word count.
Other Things to Include
- Be sure you give your paper a title.
- Number your pages and include your name on each one.
- You don’t need to include photos or diagrams, but you can if you want; both words and pictures can be useful when observing and describing cities (and films). For this particular assignment, you may find that including still images from the film can really help illustrate your points.
Deadline & Submission
This paper is due at the end of Week 7.
Student Examples
The examples below appear courtesy of MIT students and are used with permission. Examples are published anonymously unless otherwise requested.
“You’re All Thieves”: The Individual vs the City in Bicycle Thieves (PDF)
Assignment
One Theme, Three Films
Throughout the class, we’ve used film as a way to explore and discuss a number of different “themes” related to cities. Some films (and some students) have been concerned primarily with physical aspects of the urban environment: The iconic landmarks; the legibility of the landscape; the glitz and the grime; the role of transportation; the importance of neighborhoods and “local turf;” or the general look and feel of the buildings, the street, and the crowd. Others have emphasized social, cultural, or even personal aspects: The perception of safety in the city; the relative importance of close and distant social ties; issues raised by race, class, sex, gender, language, and ethnic diversity; or broad themes of freedom, control, opportunity, modernity, isolation, and social mobility. Sometimes the films we’ve seen may have echoed each other in regards to these themes, but other times they have presented contrasting or changing perspectives, or raised new wrinkles or additional complications. This has given us a lot to think about and a lot to sort out as we make sense of the city in film.
For your third paper, you are asked to pick one theme and trace it through three different films—and then also add some new observations to extend beyond the films (see “Epilogue” below). You may (read: will) find it helpful (read: necessary) to re-watch your chosen scenes a few times, stopping the film as necessary to make notes and capture all the relevant details. It will also be important to look back over the syllabus and pull in the readings as they relate to your chosen topic.
Epilogue
Once you’ve written a nice tight essay, you have one remaining task: Connecting the topics of the course to the world around us today. In a one-page “epilogue,” briefly describe an event or story in the news and connect it to your theme. Discuss how the ideas presented in the films help you to think more deeply about events, policies, people, and places in the city, and vice versa. (Please also include a copy of the news story you used as an addendum.)
Details
Length
This is a short paper, but slightly longer than the first two—please aim for a target of 4–5 pages, including the one-page “epilogue.” The goal is to present, analyze, and support a few keen, focused observations, not a comprehensive analysis of everything about the films you discuss. Decide what you want to say in advance, strive for tight writing, muster your evidence and weave it in to support your argument, and revise as necessary to make every word count.
Other Things to Include
- Be sure you give your paper a title.
- Number your pages and include your name on each one.
- You don’t need to include photos or diagrams, but you can if you want; both words and pictures can be useful when observing and describing cities (and films). For this particular assignment, you may find that including still images from the film can really help illustrate your points.
- When referring to specific scenes, please indicate (in parentheses) the time in the film where I can find the part you are writing about—for example, “at the start of the next sequence (36m:20s), we see the city turn from working to eating—it’s lunchtime in the Great City.”
Deadline & Submission
This paper is due at the beginning of Week 11.
Student Examples
The examples below appear courtesy of MIT students and are used with permission. Examples are published anonymously unless otherwise requested.
“Meet Cute” and Impossible Love in the City (PDF)
|
common_crawl_ocw.mit.edu_134
|
Overview
Students are expected to watch films attentively, with an active mind; although all of these films are certainly entertaining, we are viewing them as more than entertainment. To help facilitate this, and to generate ideas for papers and class discussion, students are required to prepare and submit notes on each film prior to the discussion session following each film. Since we will be watching films in the dark, you may want to purchase a small book-light for note-taking; laptops, tablets, and other computers cannot be used.
These notes will be graded pass / fail and are required for 12 of the 13 films in class. Taken together, these points will count for 24% of your final grade for the class. Please pay special attention to the deadlines described above: Late notes will be accepted, but will not be given credit. To help you prepare notes, this handout lists a number of questions you must answer, as well as some more general questions to just think about.
Questions
Questions to Answer in Your Notes
For each film, your notes must answer the following questions.
-
Who was the Director?
-
(a) What year was the film made?
(b) What year was it set in?
-
(a) What city was the film set in?
(b) Where do you think it was shot?
-
Jot down five adjectives or phrases to describe the sense of the city portrayed in the film. What kind of place is it? Be as descriptive and specific—and nuanced—as possible: There are a lot of rich, descriptive words out there waiting around patiently, just dying for their chance to get used. Think about how this city looks, sounds, feels—but also how it behaves: If the city were a character in this film, how would you describe its motivation or personality?
-
Briefly describe one remarkable scene—ideally one related to the subject of this course. Be sure to also explain why you chose it, and what you think it tells us about the ideas about cities presented or explored in the film.
-
Pose at least two questions you’d like to think more about or discuss in class.
-
Draw one parallel or contrast between this film and another film you’ve seen (either in this class or elsewhere), or—alternatively—some sort of real-world place or urban scene you experienced.
Questions to Think About and Maybe Answer in Your Notes
Beyond the items mentioned above, consider the following questions, and add your thoughts to your notes if you want.
- Could the film have been set somewhere else? How might this have made it a different film?
- Did the city and the places in this film seem “realistic” to you, or somehow fantastic, mythical, imaginative, or surreal? (Or something else? Or a mix?)
- How do the characters get around the city? How do they move through the physical space of the urban environment, and what does that signal to you about city life?
- How else do the characters interact with the typical elements of urban life—taxis, trains, beat-cops, payphones, lunch-counters, crowds, elevators, pot-holes, muggers, businessmen, plate-glass windows, benches, neon-signs, garbage, glitz, high-society dames, homeless people, and the like?
- Are there any elements of the city that you found notably absent from the film?
- Looking back at the adjectives you used in the previous series of questions (item 4 on the preceding list), do you think the film suggests that these characteristics apply to cities in general, or just this city in particular?
- Is there anything else about the film and the ways it depicts the city that you’d like to remember, or to call attention to for your classmates?
Filmericks
To help liven up the class a bit (as if all these great city films aren’t enough!), and also to help us all keep the films straight, I’m challenging you to come up with limericks for each film, which you can include in your weekly film notes. Writing a few of my own, I think I may have invented a new art form: the “filmerick.” Here’s what I came up with for a few of the films:
Metropolis
Joh Frederson’s city is smart,
The brains tell the brawn when to start.
But inspired by Hel,
The workers rebel:
The head and the hands need a heart.
Berlin: Symphony of a City
Made from hundreds of meters of stock,
And covering block upon block,
This film, like a rhyme,
Shows a town keeping time:
Berlin is one big cuckoo clock.
Modern Times
With all of its plot twists and swerves,
This film, like a clarion, serves
To give that impression
That the Great Depression
Did a hell of a job on our nerves.
Bicycle Thieves
De Sica shoots Rome neo-real,
The poor have been dealt a raw deal.
A bike is required
Or Ricci gets fired:
All men must eventually steal.
|
common_crawl_ocw.mit.edu_135
|
Course Overview
This page focuses on the course 11.139 The City in Film as it was taught by Ezra Haber Glenn in Spring 2015.
Using film as a lens to explore and interpret various aspects of the urban experience in both the U.S. and abroad, this course presents a survey of important developments in urbanism from 1900 to the present day, including changes in technology, bureaucracy, and industrialization; immigration and national identity; race, class, gender, and economic inequality; politics, conformity, and urban anomie; and planning, development, private property, displacement, sprawl, environmental degradation, and suburbanization.
Course Outcomes
Course Goals for Students
- Critically examine cities, films about cities, and cultural attitudes and perspectives about urban life and urban issues depicted in films
- Use techniques of close-reading and textual-analysis to interpret meaning (both implicit and explicit) in the language of cities and films
- Learn to think about the changing nature of cities over the past 100 years – initially in an American/European context, but with implications and extensions for other rapidly urbanizing areas
- Express and discuss ideas about both films and cities through written and oral arguments, using visual evidence to support arguments
Curriculum Information
Prerequisites
None
Requirements Satisfied
- CI-H
- HASS
- HASS-H
11.139 can be applied towards a Bachelor of Science in Planning, but is not required.
Offered
Every spring semester
Instructor Insights
Below, Ezra Haber Glenn describes various aspects of how he teaches 11.139 The City in Film.
Course History
This course grew out of the MIT Urban Planning Film Series, which I’ve curated since about 2008. Roughly every other Thursday, we’d show films on topics related to urban planning. The screenings were open to students and members of the community. In 2014, I created this course to give more structure to the series and to provide an option for students who wanted to delve more deeply into what films could teach us about cities.
The first year I taught this course it was a “pilot” and was only open to graduate students – about twelve in all, most of whom were enrolled in Urban Planning and Architecture programs. Based on feedback from these students, the course grew into its current state.
Role of Humor
I try to use humor as a tool to loosen people up, get them thinking, talking, and prepared to take some risks, which is really what learning is all about. It also helps, of course, that a number of the films are quite funny – Chaplin’s “Modern Times” or Tati’s “Play Time,” for sure, but even tragedies like “West Side Story” or “Midnight Cowboy” have a good deal of humor mixed in to help give the audience some perspective on the kind of thing that is a city. The trickiest thing with humor is that if people don’t catch it just right, it can seem odd or even snarky.
Facilitating Classroom Discussions
I tried to structure our classroom discussions around observations about cities and the common themes, parallels, and contrasts that emerged over the semester. This focus helped avoid falling into the pitfall of talking about what we liked, what we hated, and whether we thought the films were “good” or “bad,” and so on. The key is to be a critic – thinking critically about the issues the films present – and not simply a reviewer.
I try to make sure everyone contributes right at the beginning of each discussion. To do this, I often run the class as a brainstorming session during which we all throw ideas up on the board before deciding which ones to dig into as a group. This way, we can quickly hear from everyone and gather a wide range of possible topics, without the danger of taking positions too soon, which can often stifle good deep discussions.
Another technique for facilitating discussions is to break up into pairs or small groups to discuss a question or analyze a theme, and to then reconvene to bring all the ideas in the room together, looking for common responses or unique perspectives.
Student Feedback
Student feedback is crucial to a subject like this, which is fundamentally about how we – plural – think about and react to both cities and film. There are probably some places in academia where the authority of the professor or other expert is important, but cities tend to be much more diverse and multi-faceted than that. Films, which have been one of our most democratic and populist art forms since their invention, share in this pluralist tradition. So student input is important, both in any particular discussion and also in the overall shaping of the course.
Over the past few years, it’s been clear that students are interested in including in the syllabus very recent films, as well as films from non-Western cities. I’m still tweaking the lineup – which admittedly emphasizes the European and American world, where both film and the modern city began. To compensate for this emphasis, I include a “viewer’s choice” at the end of the semester.
In addition, in the final written assignment, I encourage students to bring their own interests, backgrounds, and questions to the topic. I learn a lot from reading students’ assignments and will integrate their ideas into future iterations of the course.
Assessment
The students’ grades were based on the following activities:
- 24% Weekly film notes
- 20% Class participation
- 36% Short papers
- 20% Final film essay
Student Information
Enrollment
13 students
Breakdown by Year
Half undergraduates and half graduate students
Typical Student Background
The graduate students were all from Urban Planning and Architecture programs, but the undergraduates came from a wide range of majors including urban planning, computer science, engineering, and anthropology.
How Student Time Was Spent
During an average week, students were expected to spend 12 hours on the course, roughly divided as follows:
In Class
- Met 2 times per week for 1.5 hours per session; 24 sessions total
- Analyzed the film and the accompanying readings for each week
- Discussed paper topics, oral presentations, readings, and additional film clips
Film Screening
- Met 1 time per week for 3 hours per session; 13 sessions total
- Watched the film of the week
Out of Class
- Completed course readings
- Composed writing assignments
|
common_crawl_ocw.mit.edu_136
|
Course Meeting Times
Lectures: 1 session / week, 1.5 hours / session
Undergraduate Recitations: 1 session / week, 1.5 hours / session
Film Screenings: 1 session / week, 3 hours / session
Course Overview
Over the past 150 years, the world has moved from one characterized by rural settlement patterns and provincial lifestyles to one dominated by urbanization, industrialization, immigration, and globalization. Interestingly, the history of this transformation overlaps nearly perfectly with the development of motion pictures, which have served as silent—and then talking—witnesses to our changing lifestyles, changing cities, and changing attitudes about the increasingly urban world we live in. Through the movies—both documentaries and feature films—we are able to see, hear, and share the lived experiences of urban dwellers around the world and across more than twelve decades.
Using film as a lens to explore and interpret various aspects of the urban experience in both the U.S. and abroad, this course presents a survey of important developments in urbanism from 1900 to the present day, including changes in technology, bureaucracy, and industrialization; immigration and national identity; race, class, gender, and economic inequality; politics, conformity, and urban anomie; planning, development, private property, displacement, sprawl, environmental degradation, and suburbanization; and more.
The films shown in the course vary from year to year, but always include a balance of “classics” from the history of film, an occasional experimental / avant-garde film, and a number of more recent, mainstream movies. (See below for this year’s schedule.)
Prerequisites
None.
|
common_crawl_ocw.mit.edu_137
|
11-139s15.jpg
Description:
With its futuristic buildings and multicolor neon lights, Beijing is beginning to resemble the city from “Blade Runner,” one of the films analyzed in this course. Image courtesy of Trey Ratcliff on Flickr. CC BY-NC-SA 2.0.
file
74 kB
11-139s15.jpg
Alt text:
A photograph of a city at night time. There is a plaza in the foreground, a large glass building in the middle, and a city skyline in the background. Many of the buildings have neon lights of all colors. The lights are reflected off the glass building.
Caption:
With its futuristic buildings and multicolor neon lights, Beijing is beginning to resemble the city from “Blade Runner,” one of the films analyzed in this course. (Image courtesy of Trey Ratcliff on Flickr. CC BY-NC-SA 2.0.)
Credit:
Image courtesy of Trey Ratcliff on Flickr. CC BY-NC-SA 2.0.
Course Info
Instructor
Departments
As Taught In
Spring
2015
Level
Learning Resource Types
assignment_turned_in
Written Assignments with Examples
Instructor Insights
|
common_crawl_ocw.mit.edu_138
|
11-139s15-th.jpg
Description:
With its futuristic buildings and multicolor neon lights, Beijing is beginning to resemble the city from “Blade Runner,” one of the films analyzed in this course. Image courtesy of Trey Ratcliff on Flickr. CC BY-NC-SA 2.0.
file
12 kB
11-139s15-th.jpg
Alt text:
A photograph of a city at night time. There is a plaza in the foreground, a large glass building in the middle, and a city skyline in the background. Many of the buildings have neon lights of all colors. The lights are reflected off the glass building.
Caption:
With its futuristic buildings and multicolor neon lights, Beijing is beginning to resemble the city from “Blade Runner,” one of the films analyzed in this course. (Image courtesy of Trey Ratcliff on Flickr. CC BY-NC-SA 2.0.)
Credit:
Image courtesy of Trey Ratcliff on Flickr. CC BY-NC-SA 2.0.
|
common_crawl_ocw.mit.edu_139
|
Required Texts [JD] = Donnelly, Jack. Universal Human Rights in Theory & Practice. 3rd ed. Cornell University Press, 2013. ISBN: 9780801477706. [Preview with Google Books] [DH] = Hurwitz, Deena, Margaret Satherthwaite, and Douglas Ford. Human Rights Advocacy Stories. Foundation Press, 2008. ISBN: 9781599411996. [BR] = Rajagopal, Balakrishnan. International Law from Below. Cambridge University Press, 2003. ISBN: 9780521016711. [Preview with Google Books] Recommended Text Background in International Law– Janis, Mark. International Law. 6th ed. Aspen Publishers, 2012. ISBN: 9781454813682. SES # TOPICS READINGS 1 Course introduction Introduction to human rights as a ‘system’ or ‘regime’ and the US BBC, The Why factor, “Why do we have human rights?” No readings assigned 2 What are human rights? History, philosophy and character of the rights discourse The Universal Declaration of Human Rights. International Covenant on Civil and Political Rights. International Covenant on Economic, Social and Cultural Rights. [JD] Chapters 1, 2, and 4. Goodale, Mark. “Introduction.” In The Practice of Human Rights. Edited by Mark Goodale and Sally Engle Merry. Cambridge University Press, 2007, pp. 1–38. ISBN: 9780521683784. Recommended Pogge, Thomas. “How Should Human Rights be Conceived?” In The Philosophy of Human Rights. Edited by Patrick Hayden. Paragon House, 2001, pp. 187–210. ISBN: 9781557787903. Nussbaum, Martha. “Capabilities and Human Rights.” (PDF - 2.0MB) Fordham Law Review 66, no. 2 (1997): 273–300. 3 Key conceptual debates: Universality v. cultural relativism, public v. private, relativity of rights, liberal origins, individual v. community rights, civil v. human rights [JD] Chapters 6 and 7. [BR] Chapter 7. [DH] Bennoune, Karima. “The Law of the Republic Versus the “Law of the Brothers”: A Story of France’s Law Banning Religious Symbols in Public Schools.” Recommended Waldron, Jeremy. “How to Argue for a Universal Claim.” Columbia Human Rights Law Review 30, no. 
305 (1998–1999): 305–14. Taylor, Charles. “A World Consensus on Human Rights?” Dissent, 1996. Beitz, Charles. “Human Rights as Common Concern.” American Political Science Review 95, no. 2 (2001): 269–82. 4 Sovereignty and self-determination?: The impact of North-South divisions, the West v. Rest [JD] Chapter 5. [BR] Chapters 3–6. Mutua, Makau. “Savages, Victims and Saviors: The Metaphor of Human Rights.” Harvard International Law Journal 42, no. 1 (2001): 201–45. Recommended Moyn, Samuel. Chapter 3 in The Last Utopia: Human Rights in History. Belknap Press, 2012. ISBN: 9780674064348. 5 Human rights at home: The struggle to enforce human rights in the US Dudziak, Mary L. “Desegregation as a Cold War Imperative.” Stanford Law Review 41, no. 1 (1988): 61–120. Cynthia Soohoo, Catherine Albisa, and Martha F. Davis, eds. Chapters 1, 5, and 7 in Bringing Human Rights Home: A History of Human Rights in the United States. University of Pennsylvania Press, 2009. ISBN: 9780812220797. Davis, Martha. “Occupy Wall Street and International Human Rights.” (PDF) Fordham Urban Law Journal 39, no. 4 (2012): 931–58. Anderson, Carol. Chapter 5 in Eyes Off the Prize: The United Nations and the African American Struggle for Human Rights, 1944–1955. Cambridge University Press, 2003. ISBN: 9780521531580. “National Report Submitted in Accordance with Paragraph 5 of the Annex to Human Rights Council Resolution 16/21*” (PDF) United Nations General Assembly, 2015. “UPR Submission, United States.” (PDF) Response by Human Rights Watch, 2014. Recommended Alexander, Michelle. The New Jim Crow: Mass Incarceration in the Age of Colorblindness. The New Press, 2012. ISBN: 9781595586438. Ignatieff, Michael, ed. American Exceptionalism and Human Rights. Princeton University Press, 2005. ISBN: 9780691116488. 6 Security v. human rights: Torture, assassinations and the War on Terror Rejali, Darius. Chapters 21, 22 and 23 in Torture and Democracy. Princeton University Press, 2007. ISBN: 9780691114224.
Luban, David. Chapters 1 and 2 in Torture, Power, and Law. Cambridge University Press, 2014. ISBN: 9781107656291. [DH] Huckerby, Jayne, and Sir Nigel Rodley. “Outlawing Torture: The Story of Amnesty International’s Efforts to Shape the UN Convention Against Torture.” [DH] Satterthwaite, Margaret. “The Story of El Masri v. Tenet: Human Rights and Humanitarian Law in the “War on Terror”.” Report of the “Senate Select Committee on Intelligence (CIA Torture Report).” (PDF - 1.6MB) 2014, pp. 1–19 (Findings and Conclusions). Recommended Alston, Philip. “The CIA and Targeted Killings beyond Borders.” (PDF) New York University School of Law Public Law and Legal Theory Research Working Paper 11–64 (2011). “The Legal Prohibition Against Torture.” Human Rights Watch, 2003. “Alan Dershowitz and Ken Roth Debate.” CNN, March 4, 2003. Bowden, Mark. “The Dark Art of Interrogation.” Atlantic Monthly, October 2003. Browse through the Various “Bush Administration Documents on Interrogation,” The Washington Post, June 23, 2004. 7 Economic development, globalization and poverty [BR] Chapter 2. [JD] Chapter 13. Rajagopal, Balakrishnan. “Right to Development and Global Governance: Old and New Challenges Twenty-Five Years On.” Human Rights Quarterly 35, no. 4 (2013): 893–909. [DH] Narula, Smita. “The Story of Narmada Bachao Andolan: Human Rights in the Global Economy and the Struggle against the World Bank.” Pogge, Thomas. “Recognised and Violated by International Law: The Human Rights of the Global Poor.” Leiden Journal of International Law 18, no. 4 (2005): 717–45. Marks, Susan. “Human Rights and the Bottom Billion.” European Human Rights Law Review 1 (2009): 37–49. Vizard, Polly, Sakiko Fukuda‐Parr, et al. “Introduction: The Capability Approach and Human Rights.” Journal of Human Development and Capabilities: A Multi-Disciplinary Journal for People-Centered Development 12, no. 1 (2011): 1–22. Recommended Marks, Susan. “Human Rights and Root Causes.” The Modern Law Review 74, no. 
1 (2011): 57–78. Righting Wrongs." The Economist, August 2001, 18–20. Pogge, Thomas. “World Poverty and Human Rights.” Ethics & International Affairs 19, no. 1 (2005): 1–7. ———. “The First Millenium Development Goal.” Paper Delivered at Carnegie Council, 2003. Reddy, Sanjay. “Counting the Poor: The Truth about World Poverty Statistics.” (PDF) Socialist Register (2006): 168–79. Bhagwati, Jagdish, and T. N. Srinivasan. “Trade and Poverty in the Poor Countries.” (PDF) Human Rights and Extreme Poverty, Report of the Independent Expert to the UN Council on Human Rights, E/CN.4/2005/49, 11 February 2005. Rahnema, Majid. “Poverty.” In The Development Dictionary. Edited by Wolfgang Sachs. St. Martin’s Press, 1992, pp. 158–76. ISBN: 9781856490436. Sen, Amartya. Chapters 1, 2, and 4 in Development as Freedom. Oxford University Press, 1999, pp. 13–53 and 87–110. ISBN: 9780198297581. 8 Gender, equality, and sexual minorities [JD] Chapter 16. Brown, Wendy. “Suffering the Paradoxes of Rights.” In Left Legalism/Left Critique. Edited by Wendy Brown and Janey Halley. Duke University Press, 2002, pp. 420–34. ISBN: 9780822329688. Halley, Janet, et al. “From The International To The Local In Feminist Legal Responses To Rape, Prostitution/Sex Work, And Sex Trafficking: Four Studies In Contemporary Governance Feminism.” (PDF) Harvard Journal of Law and Gender 29 (2006): 36. [DH] Bromley, Mark, and Kristen Walker. “The Stories of Dudgeon and Toonen: Personal Struggles to Legalize Sexual Identities.” Obergefell v. Hodges, 576 U.S.________(2015) Recommended Merry, Sally Engle. “Rights Talk and the Experience of Law: Implementing Women’s Human Rights to Protection from Violence.” Human Rights Quarterly 25, no. 2 (2003): 343–81. Human Rights Watch. “Taking Cover: Women in Post-Taliban Afghanistan.” May 2002. (Briefing paper) 9 Ethnic, religious and racial violence and group rights [JD] Chapter 3. [DH] Anaya, S. James, and Maia S. Campbell. 
“Gaining Legal Recognition of Indigenous Land Rights: The Story of the Awas Tingni Case in Nicaragua.” Kymlicka, Will. “The Good, the Bad, and the Intolerable: Minority Group Rights.” Dissent, 1996. Engle, Karen. “On Fragile Architecture: The UN Declaration on the Rights of Indigenous Peoples in the Context of Human Rights.” European Journal of International Law 22, no. 1 (2011): 141–63. Recommended Kymlicka, Will. Chapter 2 in Multicultural Odysseys: Navigating the New International Politics of Diversity. Oxford University Press, 2007. ISBN: 9780199280407. Skim through Human Rights Watch. ““We have no Orders to Save You”: State Participation and Complicity in Communal Violence in Gujarat.” 2002. Skim through Human Rights Watch. “Broken People: Caste Violence Against India’s Untouchables.” 1999. Mutua, Makau. “Limitations on Religious Rights: Problematizing Religious Freedom in the African Context.” Buff Human Rights Law Review 5, no. 75 (1999). Power, Samantha. “Bystanders to Genocide.” The Atlantic Monthly, September 2001. 10 Forcible Intervention versus sovereignty [JD] Chapter 15. Todorov, Tzvetan, and Michael Ignatieff. “Right to Intervene or Duty to Assist?” and “Human Rights, Sovereignty and Intervention.” In Human Rights, Human Wrongs. Edited by Nicholas Owen. Oxford University Press, 2002, pp. 28–87. ISBN: 9780192802194. [Preview with Google Books] Farer, Tom. “Humanitarian Intervention Before and After 9 / 11: Legality and Legitimacy.” In Humanitarian Intervention: Ethical, Legal and Political Dilemmas. Edited by J. L. Holzgrefe and Robert O. Keohane. Cambridge University Press, 2003, pp. 53–89. ISBN: 9780521821988. [Preview with Google Books] Cushman, Thomas. “The Liberal Case for the War in Iraq.” In Human Rights in the ‘War on Terror’. Edited by Richard A. Wilson. Cambridge University Press, 2005, pp. 78–107. ISBN: 9780521618335. Rieff, David. 
Part II, “The Lives they Lives.” to “The Way we Live Now.” In At the Point of a Gun: Democratic Dreams and Armed Intervention. Simon & Schuster, 2013. ISBN: 9780743287074. Rieff, David. “R2P, R.I.P.,” The New York Times, November 7, 2011. (Op-Ed Page) Recommended "The Responsibility to Protect, Report of the International Commission on Intervention and State Sovereignty." (PDF - 3.8MB) 2001. Berman, Nathaniel. “Intervention in a ‘Divided World’: Axes of Legitimacy.” European Journal of International Law 17, no. 4 (2006): 743–69. “Report of the Secretary-General’s High-level Panel on Threats, Challenges and Change.” (PDF - 1.8MB) (United Nations, 2004) Koskenniemi, Martti. “The Lady Doth Protest too Much: Kosovo, and the Turn to Ethics in International Law.” The Modern Law Review 65, no. 2 (2002): 159–75. 11 Open class for discussion and / or a topic to be chosen by students Conclusion and wrapping up No readings assigned
|
common_crawl_ocw.mit.edu_140
|
Course Meeting Times
2 sessions / week, 1.5 hrs / session
Prerequisites
None
Course Description
This class is about figuring out together what cities and users can do to reduce their energy use and carbon emissions. Many other classes at MIT focus on policies, technologies, and systems, often at the national or international level, but this course focuses on the scale of cities and users for the following reasons:
- Cities are centers of economic activity, population, and energy and material consumption.
- Cities, not nations, are making the most ambitious commitments towards climate goals.
- This scale reveals inequality, racism, and environmental justice issues in the energy system.
- The relationship of users to the energy system has been static for nearly a century.
- New information and data technologies are rapidly changing the built environment.
- Developing countries could leapfrog existing technologies, and many developed countries need to replace existing systems.
This course is designed for any students interested in learning how to intervene in the energy use of cities using policy, technology, economics, and urban planning. I welcome students with many different backgrounds because it enriches our discussions, but some of the following rationales for this course may also appeal to you:
- For planners, there are many jobs in this area that will shape how we use energy in the future. This class will integrate fundamental technical understanding with your policy skills so you can tackle the inevitable energy and climate issues that will affect all communities in the future.
- For engineers, 54% of all people now live in cities that generate 70% of world carbon emissions and 80% of world GDP; by 2050, 66% of the world’s population is expected to be urban. The focus of this class on urban energy use, efficiency, jurisdiction, institutions, and governance complements many other more technical classes at MIT.
- For climate change: given the uncertain prospects of national and international efforts, efforts in cities may be the fastest and most pragmatic solution.
These topics are especially exciting in this place and this year, given that the US Congress has passed its first climate legislation in thirty years, and Massachusetts continues to pass aggressive climate legislation.
Learning Objectives
- Learn about the role and potential of cities and users to shape the energy system
- Develop understanding of energy systems, infrastructure, and technology in cities
- Develop ability to do simple back-of-the-envelope calculations
- Understand what an equitable energy transition will look like
- Identify key points or issues for future management, intervention, or revolution
- Work together with a diverse group of people and disciplines
Structure of the Course
The semester is divided into two halves:
- In the first segment you will learn which basic calculations to perform in order to analyze one or two cities (more on that later), and we will learn about key technical aspects of energy systems in all cities.
- In the second segment we will examine the policies and institutions governing urban energy systems, with a particular focus on regulation and markets of the electricity sector in the US. Discussion and feedback will help you build up the base of knowledge and material that you need to write your paper.
Putting the two halves together will help you decide where and how to intervene in urban energy systems.
Activities
At the beginning of the class, we will build a composite picture of our class, using our personal experiences and visions for the future of the energy systems you are familiar with. Please calculate the current carbon emissions for yourself and/or an average resident for where (a) you lived before MIT and (b) where and how you think you will live in 2050, using the CoolClimate calculator.
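The spirit of this exercise can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration, not the CoolClimate model: the consumption categories and emission factors below are rough, order-of-magnitude values assumed for demonstration only.

```python
# A minimal back-of-the-envelope carbon footprint sketch (illustrative only;
# not the CoolClimate model). Emission factors are rough assumed values.

EMISSION_FACTORS = {
    "electricity_kwh": 0.4,    # kg CO2 per kWh (rough grid average, assumed)
    "natural_gas_therm": 5.3,  # kg CO2 per therm (assumed)
    "gasoline_liter": 2.3,     # kg CO2 per liter burned (assumed)
    "flight_km": 0.15,         # kg CO2 per passenger-km (assumed)
}

def annual_footprint_tonnes(usage: dict) -> float:
    """Sum annual CO2 in tonnes for the given annual usage amounts."""
    kg = sum(EMISSION_FACTORS[category] * amount
             for category, amount in usage.items())
    return kg / 1000.0

# Hypothetical example: one car-commuting suburban resident's year.
me = {
    "electricity_kwh": 4000,
    "natural_gas_therm": 400,
    "gasoline_liter": 1500,
    "flight_km": 8000,
}
print(round(annual_footprint_tonnes(me), 1))  # → 8.4 (tonnes CO2/year)
```

The point of the exercise is less the precise number than the ability to see, at a glance, which category dominates your footprint and what a 2050 scenario would have to change.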
For each lecture, we will be “flipping” the classroom; I will record a short video before class that highlights key issues from the reading material and sets the stage for our class exercises and discussion. Then, in each class, we will discuss (a) recent news developments, (b) the class assignment, which will take the form of short problem sets that either test your reading comprehension or walk you through basic calculation exercises, and (c) a topic for which a student will volunteer to help stimulate discussion and debate about the complex aspects of cities, climate, and energy (the fun part!).
Readings
The primary text for the class is:
- MacKay, David J.C., 2009. Sustainable Energy—Without the Hot Air, 1st ed., UIT Cambridge Ltd. ISBN: 9780954452933. This can be downloaded legally as a PDF or read in webpage format here.
Other papers assigned for each class are listed on the Readings page. I may occasionally modify the weekly readings, in which case I will notify you in advance.
Optional companion books focusing on materials and food may interest some of you:
- Allwood, Julian M., and Jonathan M. Cullen. 2012. Sustainable Materials: With Both Eyes Open. UIT Cambridge Ltd. ISBN: 9781906860059. This can be downloaded legally as a PDF or read in webpage format here.
- Bridle, Sarah L. 2020. Food and Climate Change Without the Hot Air: Change Your Diet: The Easiest Way to Help Save the Planet. UIT Cambridge, Ltd. ISBN: 9780857845030.
Regular News Reading
You should do regular readings to educate yourself during the class—and beyond!—on specific areas of interest. I will start most class days with a brief discussion of current events related to our reading. News services have greatly expanded their coverage of energy and climate, as well as related policies and legislation.
- Many major newspapers allow you to subscribe to climate- and energy-specific newsletters. Examples: the New York Times (Climate Forward); the Washington Post (Climate 202); the Wall Street Journal (WSJ Climate & Energy); the Financial Times (Climate Capital); the Boston Globe (Into the Red); the Los Angeles Times (Boiling Point).
- Magazines and journals such as the New Yorker (Bill McKibben, Elizabeth Kolbert) and the Atlantic (Robinson Meyer)
- E&E News
Other news and commentary outlets also have excellent climate coverage, including:
- U.S. and regional Energy News Networks. These newsletters are an excellent source of local and regional energy news.
- Twitter is surprisingly useful if you follow the right people and are not distracted by cute puppy videos.
It is all too much to read every day, but learning what to pay attention to, and how, is a valuable way to see what people in the energy and climate spaces are talking about. I welcome any news suggestions that you all want to discuss (the quirkier, the better!).
Schedule and Topics
Session 1: Introduction: Welcome!
Session 2: Introduction: Cities and Decarbonization; problem set 1 due
Session 3: Introduction: Equitable, Just Transition; problem set 2 due
Session 4: Built Environment and Land Use; problem set 3 due
Session 5: Consumption: Personal Transport; problem set 4 due
Session 6: Consumption: Transport Systems; problem set 5 due
Session 7: Consumption: Transportation Systems: What Can Cities Do?; problem set 6 due
Session 8: Consumption: Buildings and Energy Efficiency; problem set 7 due
Session 9: Consumption: Building Energy Policies; problem set 8 due
Session 10: Consumption: Energy Efficiency; problem set 9 due
Session 11: Sources and Systems: Industry and Making Stuff; problem set 10 due
Session 12: Sources and Systems: Fossil, CCUS, and Nuclear; problem set 11 due
Session 13: Sources and Systems: Renewable Resources; problem set 12 due
Session 14: Sources and Systems: Siting Renewables; problem set 13 due
Session 15: Sources and Systems: Distributed Resources; problem set 14 due
Session 16: Midterm exam
Session 17: Policy and Institutions: “The Grid” System; problem set 15 due
Session 18: Policy and Institutions: “The Grid” Continued; problem set 16 due
Session 19: Policy and Institutions: Regulation; problem set 17 due
Session 20: Policy and Institutions: Ownership; problem set 18 due
Session 21: Policy and Institutions: Scales and Choices; problem set 19 due
Session 22: Policy and Institutions: Possible Futures; problem set 20 due
Session 23: Wrapping Up: Group-led discussion 1
Session 24: Wrapping Up: Group-led discussion 2
Session 25: Wrapping Up: Group-led discussion 3
Session 26: Wrapping Up: Final papers due
Grading
Expectations/Norms
- Watch the video lecture, do the reading, and submit your problem sets or questions the day before class.
- Ask questions and contribute insights for everyone’s learning.
- Focus on class discussion and lecture, i.e., use technology effectively and only as needed.
Grade Breakdown
- Before class prep: problem sets and reading questions 20%
- Class presence/discussion/participation 15%
- Exam 25%
- Paper proposal 5%
- Short presentation, group discussion 5%
- Final paper 30%
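In code, the breakdown above amounts to a simple weighted average. The component names and the sample scores here are made up for illustration; only the weights come from the syllabus:

```python
# Grade weights from the syllabus breakdown (they sum to 100%).
WEIGHTS = {
    "prep": 0.20,           # problem sets and reading questions
    "participation": 0.15,  # class presence/discussion
    "exam": 0.25,
    "proposal": 0.05,       # paper proposal
    "presentation": 0.05,   # short presentation, group discussion
    "final_paper": 0.30,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9

def course_grade(scores):
    """Weighted average of component scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical student scoring 90 on everything except an 80 on the exam:
scores = dict.fromkeys(WEIGHTS, 90)
scores["exam"] = 80
print(round(course_grade(scores), 2))  # → 87.5
```

Note how the 30% final paper and 25% exam dominate: a 10-point drop on the exam costs 2.5 points overall.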
Please make an effort to be on time for class, and please let me know in advance if you will miss class. Missing more than two classes will affect your participation / discussion grade.
Assignments and Due Dates
Before each class, watching my video lecture and doing the reading and a basic calculation exercise will help build up your understanding of which numbers matter, as well as your background knowledge of a particular city. We will reinforce the knowledge with an exam, but if you watch the lecture and do the reading, calculation, and homework for each class, then the exam should be fairly straightforward.
The final paper assignment will synthesize what you learn over the semester by considering the prospects for a technological or policy innovation in a city of your choosing (I recommend your home or future city). Undergraduates will be expected to write a short paper of 8 pages minimum. Graduate students will write a paper of 12 pages minimum, with the additional task of analyzing their chosen city in terms of its expected future demographic changes.
We will have group discussions in the last three classes to share knowledge from our papers. This is also a good chance to put finishing touches on your final paper. Writing a good paper is much easier if you plan ahead, get feedback or help from your classmates, the MIT Writing and Communication Center, and myself, and have time to revise.
Problem sets and reading questions are due by midnight (11:59 pm) the day before class. Earlier is better for your sleep, though! For more details on the expectations for the assignments, see the page.
Extensions
Each person is allowed to miss up to 3 problem sets and reading questions; misses are assessed automatically when the midnight deadline passes. I can’t give any extensions for the final paper because grades are due three days after the end of class, so plan ahead for this. In cases of extreme physical or emotional circumstances, any further extensions should be requested from the Office of Graduate Education; if they decide that an extension is warranted, they will send me a generic note, which preserves your privacy.
Academic Integrity
Plagiarism, unauthorized collaboration, cheating, and facilitating academic dishonesty are academic crimes. It is your responsibility as students and scholars to understand the definition of any such activities, and to avoid and discourage them. Engaging in these activities either knowingly or unknowingly may result in severe academic sanctions, and you are therefore expected to familiarize yourself with MIT’s academic integrity policies.
common_crawl_ocw.mit.edu_141
Course Meeting Times
Lectures: 1 session / week, 1 hour / session
Recitations: 1 session / week, 1.5 hours / session
Course Description
This course focuses on methods of digital visualization and communication and their application to planning issues. Lectures will introduce a variety of methods for describing or representing a place and its residents, for simulating changes, for presenting visions of the future, and for engaging multiple actors in the process of guiding action. Students will apply these methods through a series of laboratory exercises as well as the construction of a web-based portfolio. The portfolio will serve as a container for these exercises and other work completed throughout the MCP program.
This course introduces students to (1) such persistent and recurring themes as place, race, and power that face planners, (2) the role of digital technologies in representing, analyzing, and mobilizing communities, (3) MIT’s computing environment and resources, including Server, Element K, the ESRI virtual campus, Computer Resources Network, Web Communications Services, and the GIS Laboratory at Rotch Library, and (4) software tools like Adobe’s® Photoshop®, ESRI’s ArcGIS™, Microsoft’s® Excel and Access, as well as Macromedia’s® Dreamweaver®.
Evaluation
Lab Exercises: In total, the laboratory exercises account for 50% of your grade. They are, however, weighted unevenly. Please note that labs 2, 4, 5, & 6 are each worth 10 points, while labs 1 and 3 are worth 5 points apiece.
Web-based Portfolio: You will begin thinking about and working on your web-based portfolio at the beginning of the semester and we expect that you will make improvements to it throughout the semester (ask for feedback and help at any time) and throughout your tenure at MIT. The web-based portfolio project will account for 20% of your final grade.
Final Project: The final project is a group project and will constitute 30% of your final grade. Each member of the group is expected to contribute to the project, and we expect group members to handle personal and other conflicts in a professional manner.
Finally, attendance and participation count: We expect to see you in class and expect that you will contribute to the conversation.
Lateness Policy
Turning in lab exercises promptly is important for keeping current with the subject matter, which is cumulative. As a result, we have adopted a lateness policy for exercises that are turned in after their due date. A late lab exercise will be accepted up until one week after the original due date for a loss of one grade (e.g., an “A” becomes a “B” or a “check” becomes a “check minus”). After one week, we will not accept the exercise, and you will receive a zero.
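The penalty rule can be sketched as a one-step move down a grade ladder. The ladders and the after-one-week handling below are assumptions for illustration; the course may use finer-grained steps (pluses and minuses):

```python
# One-grade lateness penalty, per the policy: an "A" becomes a "B",
# a "check" becomes a "check minus"; after one week, zero credit.
# The grade ladders below are assumed for illustration.

LETTER_LADDER = ["A", "B", "C", "D", "F"]
CHECK_LADDER = ["check plus", "check", "check minus", "zero"]

def apply_late_penalty(grade, days_late):
    """Drop one grade if up to a week late; no credit after that."""
    ladder = LETTER_LADDER if grade in LETTER_LADDER else CHECK_LADDER
    if days_late <= 0:
        return grade
    if days_late > 7:
        return ladder[-1]  # exercise no longer accepted
    i = ladder.index(grade)
    return ladder[min(i + 1, len(ladder) - 1)]

print(apply_late_penalty("A", 3))      # → B
print(apply_late_penalty("check", 2))  # → check minus
```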
How Can We Improve?
Rather than wait until the end of the semester for feedback, we invite students to comment on the course throughout the semester. We will carefully consider suggestions submitted during the semester and implement appropriate changes along the way. If we are unable to make some changes in the current semester, they will be integrated into the next year’s course design.
common_crawl_ocw.mit.edu_142
In-Class and Take-Home Exercises by Week
All screenshots of ArcGIS and ArcMAP software in the following files are © ESRI, and all screenshots of QGIS software are © QGIS. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Week 0
- In-class exercise for ArcGIS (PDF - 1.2 MB)
- In-class exercise for QGIS (PDF - 3.6 MB)
- Take-home exercise (PDF)
Week 1
- In-class exercise for ArcGIS (PDF - 2.7 MB)
- In-class exercise for QGIS (PDF - 2.9 MB)
- Take-home exercise (PDF)
Week 2
- In-class exercise for ArcGIS (PDF - 2.6 MB)
- In-class exercise for ArcGIS - optional part 2 (PDF)
- In-class exercise for QGIS (PDF - 2.2 MB)
- Take-home exercise (PDF)
Week 3
- In-class exercise for ArcGIS - part 1 (PDF - 2.1 MB)
- In-class exercise for ArcGIS - optional part 2 (PDF)
- In-class exercise for QGIS - part 1 (PDF - 2.7 MB)
- In-class exercise for QGIS - optional part 2 (PDF)
Week 4
Week 5
Week 6
- In-class exercise for ArcGIS (PDF - 1.7 MB)
- In-class exercise for QGIS (PDF - 1.9 MB)
- Take-home exercise (PDF)
common_crawl_ocw.mit.edu_143
For full bibliographic information, see the Bibliography page.

Week 0: Getting Started
Primary Readings
- Maantay and Ziegler, GIS for the Urban Environment, Chapter 1
- Longley et al., Geographical Information Science and Systems, Chapter 1
Case Study Readings (optional, but helpful for homework)
- Maantay and Ziegler, Case Studies 1-2
- Longley et al., Chapter 2
Helpful Resources for Websites
- DUSPviz – Intro to HTML and CSS
- DUSPviz – Portfolios with a Bootstrap Template
- DUSPviz – Tutorials

Week 1: Cartography and Map Design
Primary Readings
- Slocum et al., Thematic Cartography and Geographic Visualization, Chapter 1
- Maantay and Ziegler, Chapter 4
Visual Guides to Symbols and Classification (optional, but recommended reference for homework)
- Krygier and Wood, Making Maps, Chapters 8 and 9
Case Study Readings: Power of Visualization
- Kurgan, Close Up at a Distance, Chapter 9
- Kitchin et al., Rethinking Maps, Chapter 1
Optional Reading
- Corner, “The Agency of Mapping” (great for designers!)
- Cosgrove, “Carto-City” (great for urbanists!)

Week 2: Spatial Data - Types, Structures, and Representation
Primary Readings
- Maantay and Ziegler, Chapter 4
- Longley et al., Chapter 9
- Salm, “Visualizing NYC’s MapPLUTO Database”
Case Study Readings: Zoning and GIS
- Bui et al., “40 Percent of the Buildings in Manhattan Could Not Be Built Today”
- Zandbergen and Hart, “Reducing Housing Options for Convicted Sex Offenders”
Web Sites to Explore
- Municipal Art Society: Accidental Skyline Tool
- OASIS Map
- Cambridge Map Viewer
Optional Reading (summary, helpful for quick understanding)
- Krygier and Wood, Chapter 3

Week 3: Quantitative Mapping: Census Data and the ACS
Primary Readings
- Maantay and Ziegler, Chapter 6
- Peters and MacDonald, Urban Policy and the Census, Chapters 1 and 2 (Chapter 3 optional)
- Badger, “A Census Question That Could Change How Power Is Divided in America”
- Branigan, “China census could be first to record true population”
Case Study: Segregation, Funding, and Schools
- Turner, “The 50 Most Segregating School Borders In America”
- Turner, “Why America’s Schools Have A Money Problem”
- NPR, 2016 Fault Lines Web Map
- “Fault Lines: America’s Most Segregated School District Borders” (pp. 21-26 mandatory, the rest optional)
Web Sites to Explore
- Census Data Visualization Gallery
- Bloch et al., “Mapping the 2010 U.S. Census”
- Aisch et al., “Where We Came From and Where We Went, State by State”
- Bloch et al., “Mapping America: Every City, Every Block”
- Gebeloff et al., “Where Poor and Uninsured Americans Live”
- Bloch et al., “Mapping Uninsured Americans”
- Community Data Portal, Pratt Center for Community Development
Optional Reading
- Monmonier, How to Lie with Maps, Chapter 10

Week 4: Spatial Data Formats and Geoprocessing
Primary Readings
- Maantay and Ziegler, Chapter 9
- Schlossberg, “GIS, the US Census and Neighborhood Scale Analysis”
Case Study: Transportation Catchment Areas and GIS
- Andersen and Landex, “GIS-based Approaches to Catchment Area Analysis of Mass Transit”
- Frank et al., “Urban Form, Travel Time, and Cost Relationships with Tour Complexity and Mode Choice”
- Cox, “Region’s Transportation and Land-Use Policies Have Little Effect on Traffic Congestion”
Optional Reading
- Kremer and DeLiberty, “Local Food Practices and Growing Potential”

Week 5: Geocoding and Address Finding - Beginning Raster Decision Making
Primary Readings
- Maantay and Ziegler, Chapter 7
- Shkolnikov, “Befriending a Geocoder”
Case Study: Geocoding (for those interested in U.S.-based economic development and arts and culture)
- Currid and Williams, “The Geography of Buzz”
Case Study: Addressing the Favelas (everyone look at this!)
- “Take a Trip to the Olympics with what3words and RioGo”
- “Addressing Brazil’s favelas | what3words”
- Marshall, “Google and Microsoft are Putting Rio’s Favelas on the Map”
Reference Reading (to understand geocoding errors and issues)
- Goldberg et al., “From Text to Geographic Coordinates”
- Zandbergen, “Influence of street reference data on geocoding quality”
- Hart and Zandbergen, “Reference data and geocoding quality” (good background on geocoding problems you should be aware of)

Week 6: Raster Decision Making and Suitability
Primary Readings
- Maantay and Ziegler, Chapters 9 and 12
- Collins, Steiner, and Rushman, “Land-Use Suitability Analysis in the United States”
Case Study
- Kar and Hodgson, “A GIS-Based Model to Determine Site Suitability of Emergency Evacuation Shelters”
Optional Reading
- Carr and Zwick, “Using GIS Suitability Analysis to Identify Potential Future Land Use Conflicts”

Week 7: Transition to 11.520
[none]
common_crawl_ocw.mit.edu_144
[For dates of reading assignments, see the Readings page.]
“Addressing Brazil’s favelas | what3words” (YouTube). what3words, March 15, 2017.
Aisch, Gregor, Robert Gebeloff, and Kevin Quealy. “Where We Came From and Where We Went, State by State.” New York Times, Aug. 13, 2014.
Andersen, Jonas Lohmann Elkjaer, and Alex Landex. “GIS-Based Approaches to Catchment Area Analysis of Mass Transit” in ESRI International User Conference (2009).
Badger, Emily. “A Census Question That Could Change How Power Is Divided in America.” The New York Times, July 31, 2018.
Bloch, Matthew, Shan Carter, and Alan McLean. “Mapping America: Every City, Every Block.” New York Times map feature [requires Flash player].
———. “Mapping the 2010 U.S. Census.” New York Times map feature [requires Flash player].
Bloch, Matthew, Matthew Ericson, and Tom Giratikanon. “Mapping Uninsured Americans.” New York Times map feature.
Branigan, Tania. “China Census Could Be First to Record True Population.” The Guardian, Nov. 1, 2010.
Bui, Quoctrung, Matt A.V. Chaban, and Jeremy White. “40 Percent of the Buildings in Manhattan Could Not Be Built Today.” New York Times, May 20, 2016.
Carr, Margaret H., and Paul Zwick. “Using GIS Suitability Analysis to Identify Potential Future Land Use Conflicts in North Central Florida” (PDF). Journal of Conservation Planning 1 (2005): 89–105.
Collins, Michael G., Frederick Steiner, and Michael J. Rushman. “Land-Use Suitability Analysis in the United States: Historical Development and Promising Technological Achievements.” Environmental Management 28:5 (2001): 611–621.
Corner, James. “The Agency of Mapping” in Martin Dodge, Rob Kitchin, and Chris Perkins (eds.), The Map Reader: Theories of Mapping Practice and Cartographic Representation, Wiley (2011). ISBN: 9780470742839.
Cosgrove, Denis. “Carto-City” in Geography and Vision: Seeing and Representing the World, I.B. Tauris, 2008. ISBN: 9781850438472.
Cox, Wendell. “Region’s Transportation and Land-Use Policies Have Little Effect on Traffic Congestion.” The Seattle Times, May 1, 2012.
Currid, Elizabeth, and Sarah Williams. “The Geography of Buzz: Art, Culture and the Social Milieu in Los Angeles and New York” (PDF). Journal of Economic Geography (July 2009): 1–29.
“Fault Lines: America’s Most Segregated School District Borders” (PDF - 3.4 MB), Edbuild, 2016.
Frank, Lawrence D., Mark Bradley, Sarah Kavage, James Chapman, and T. Keith Lawton. “Urban Form, Travel Time, and Cost Relationships with Tour Complexity and Mode Choice.” Transportation 35:1 (2008): 37–54.
Gebeloff, Robert, Haeyoun Park, Matthew Bloch, and Matthew Ericson. “Where Poor and Uninsured Americans Live.” New York Times, Oct. 2, 2013.
Goldberg, Daniel W., John P. Wilson, and Craig A. Knoblock. “From Text to Geographic Coordinates: The Current State of Geocoding” (PDF). URISA Journal 19:1 (2007): 33–46.
Hart, Timothy C., and Paul A. Zandbergen. “Reference Data and Geocoding Quality: Examining Completeness and Positional Accuracy of Street Geocoded Crime Incidents.” Policing: An International Journal of Police Strategies & Management 36:2 (2013): 263–294.
Kar, Bandana, and Michael E. Hodgson. “A GIS‐Based Model to Determine Site Suitability of Emergency Evacuation Shelters” (PDF). Transactions in GIS 12.2 (2008): 227–248.
Kitchin, Rob, Chris Perkins, and Martin Dodge. Rethinking Maps: New Frontiers in Cartographic Theory. Routledge (2009). ISBN: 9780415461528.
Kremer, Peleg, and Tracy L. DeLiberty. “Local Food Practices and Growing Potential: Mapping the Case of Philadelphia.” Applied Geography 31:4 (2011): 1252–1261.
Krygier, John, and Denis Wood. Making Maps: A Visual Guide to Map Design for GIS, Second Edition. Guildford Press (2011). ISBN: 9781462509980.
Kurgan, Laura. Close Up at a Distance, Mapping Technology and Politics. Zone Books (2013). ISBN: 9781935408284.
Longley, Paul A., Michael F. Goodchild, David Maguire, and David W. Rhind. Geographical Information Science and Systems, Fourth Edition. ISBN: 9781118676950.
Maantay, Juliana, and John Ziegler, GIS for the Urban Environment. ESRI Press (2006). ISBN: 9781589480827.
MacDonald, Heather, and Alan Peters. Urban Policy and the Census. ESRI Press (2011). ISBN: 9781589482227.
Marshall, Aarian. “Google and Microsoft are Putting Rio’s Favelas on the Map.” Bloomberg CityLab, Sept. 26, 2014.
Monmonier, Mark. How to Lie with Maps, Second Edition. University of Chicago Press (1996). ISBN: 9780226534213.
Salm, Jon. “Visualizing NYC’s MapPLUTO Database.” Scribble, August 12, 2013.
Schlossberg, Marc. “GIS, the US Census and Neighbourhood Scale Analysis.” Planning, Practice & Research 18:2–3 (May–August 2003): 213–217.
Shkolnikov, Diana. “Befriending a Geocoder.” State of the Map Conference (2016).
Slocum, Terry A., Robert B. McMaster, Fritz C. Kessler, and Hugh H. Howard. Thematic Cartography and Geographic Visualization, Third Edition. Pearson (2009). ISBN: 9780132298346.
“Take a trip to the Olympics with what3words and RioGo.” what3words.com, April 8, 2016.
Turner, Corey. “The 50 Most Segregating School Borders In America.” NPR, Aug. 23, 2016.
———. “Why America’s Schools Have A Money Problem.” NPR, April 18, 2016.
Zandbergen, Paul A. “Influence of Street Reference Data on Geocoding Quality.” Geocarto International 26:1 (2011): 35–47.
Zandbergen, Paul A., and Timothy C. Hart. “Reducing Housing Options for Convicted Sex Offenders: Investigating the Impact of Residency Restriction Laws Using GIS.” Justice Research and Policy 8:2 (2006): 1–24.
common_crawl_ocw.mit.edu_145
Course Meeting Times
Lectures: 2 sessions / week, 1.5 hours / session
Labs: 4 sessions / week, 2 hours / session
Prerequisites
There are no prerequisites for this course.
Description
Geographic Information Systems (GIS) are tools for managing data that represent the location of features (geographic coordinate data) and what they are like (attribute data); they also provide the ability to query, manipulate, and analyze those data. Because GIS allows one to represent social and environmental data on maps, it has become an important analysis tool used across a variety of fields, including planning, architecture, engineering, public health, environmental science, economics, epidemiology, and business. GIS has also become an important political instrument, allowing communities and regions to graphically tell their story. GIS is a powerful tool; this course introduces students to the basics and, because GIS can be applied to so many research fields, aims to give you an understanding of its possibilities.
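The coordinate-plus-attribute model and the query operations described above can be illustrated with a toy layer. This is a deliberately simplified sketch using plain x/y coordinates and Python dictionaries, not any real GIS data model or ESRI API:

```python
from math import hypot

# Toy GIS layer: each feature has location data (plain x/y here, in
# place of real geographic coordinates) plus attribute data
# describing what the feature is like.
features = [
    {"name": "park",    "x": 0.0, "y": 0.0, "land_use": "open space"},
    {"name": "school",  "x": 3.0, "y": 4.0, "land_use": "institutional"},
    {"name": "factory", "x": 9.0, "y": 1.0, "land_use": "industrial"},
]

def within_distance(layer, x, y, radius):
    """Spatial query: features whose location lies within radius of (x, y)."""
    return [f for f in layer if hypot(f["x"] - x, f["y"] - y) <= radius]

def select_by_attribute(layer, key, value):
    """Attribute query: features whose attribute key equals value."""
    return [f for f in layer if f[key] == value]

near = within_distance(features, 0, 0, 6)
print([f["name"] for f in near])  # → ['park', 'school']
industrial = select_by_attribute(features, "land_use", "industrial")
print([f["name"] for f in industrial])  # → ['factory']
```

Real GIS software layers far more on top (projections, polygons, spatial indexes, joins), but every analysis in this course reduces to combinations of these two kinds of query.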
Learning through Practice
The class will focus on teaching through practical example. All the course exercises will focus on real-world problems confronted by the Bronx River Alliance, an advocacy group for the Bronx River. Exercises will focus on the Bronx River Alliance’s real-world needs, in order to give students a better understanding of how GIS is applied to planning situations.
Relationship between This Course and Its Sequel
11.205 Introduction to Spatial Analysis (this course) and 11.520 GIS Workshop are two modular courses that make up the Introduction to GIS series. 11.205 Introduction to Spatial Analysis is required by the Master in City Planning degree, but students who have a previous background in GIS can test out of this course. 11.520 GIS Workshop focuses on developing a research project using GIS, as well as an introduction to some advanced topics in data collection and web mapping. Working on your own GIS project is the best way to learn GIS, as it teaches you to apply the concepts you learn beyond the step-by-step tutorials in class. Students of all GIS backgrounds are welcome to take the GIS Workshop course, and experienced students may be interested in taking it to test ideas for a thesis or to investigate projects that use spatial analysis. Taken together, Introduction to Spatial Analysis and GIS Workshop give you a complete set of skills needed to start your own GIS project.
Course Objectives
Students taking this course will:
- Develop an understanding of basic skills necessary to work with Geographic Information Systems (GIS), using ESRI’s ArcGIS software
- Learn about GIS data types
- Learn spatial data visualization techniques and cartography
- Learn about GIS and local government data
- Learn about GIS and census data
- Learn geo-processing tools
- Learn about GIS and decision-making
Assignments and Grading
NO LATE ASSIGNMENTS WILL BE ACCEPTED!! We cannot accept late assignments—the class is too short. If we allow late assignments it holds up grading for all the other students.
Materials
Hard Drive
It is recommended that everyone get an external hard drive to hold data for your assignments and final project. We suggest a hard drive with a minimum of 120 GB of space, but you can find much larger drives, up to a terabyte, at very reasonable prices.
Book
Julie Maantay and John Ziegler, GIS for the Urban Environment. Esri Press, 2006. ISBN: 9781589480827.
Getting Help
There are many, many ways to get help for this class:
[Note: The first three resources listed below are unfortunately not available to OpenCourseWare users.]
Discussion Forum
If you have a question, it is likely that others might have that question too, or have already found a solution to the same issue. We encourage you to post questions to the discussion forum on the online class website first. Both the teaching assistants and lab instructors will be answering questions that arrive at the discussion forum, before we answer questions received via our personal email. So please try to use the discussion board first.
Teaching Assistants and Office Hours
The teaching assistants will hold office hours, during which you can work on assignments and ask the TAs for help. We strongly suggest taking advantage of the TAs’ office hours.
GIS Laboratory in the Libraries
Located in Rotch Library, this is a great resource for GIS data and technical questions. The GIS Laboratory collects GIS data and might have data you need for your final project. The GIS lab also has technical consultants available for questions regarding the acquisition of data as well as the technical questions related to performing certain GIS operations. Seek them out.
The Documentation
It’s not a bad idea to read the manual! Both the ArcGIS documentation and the QGIS documentation are available online.
Stack Overflow
A well-known, community-driven, tech help forum, Stack Overflow has become the go-to venue for tech help. It also has a really great GIS help forum!
ESRI GeoNet and User Forums
The old ESRI user forums and the new GeoNet site are great resources for technical GIS software questions.
common_crawl_ocw.mit.edu_146