1.124J | Fall 2000 | Graduate

Foundations of Software Engineering

Programs that Function as Applets and as Applications

The following example shows how you might write a Java® program so that it can function either as an applet or as an application. The program can run as an applet because it extends JApplet, and it can run as an application because it has a main routine. The code creates the UI components within a JPanel and then sets this panel as the content pane of a JApplet or a JFrame. When the program runs as an applet, the JApplet itself serves as the top-level container. When the program runs as an application, we create a JFrame and use that as the top-level container.

MyApp.java

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.event.*;

// This class can be run either as an Applet or as an Application.
public class MyApp extends JApplet {
// The RootPaneContainer interface is implemented by JApplet and by JFrame.
// It specifies methods like setContentPane() and getContentPane().  The
// content pane is of type java.awt.Container or one of its subclasses.
RootPaneContainer mRPC;

// This constructor is used when we run as an applet.
public MyApp() {
mRPC = this;
}

// This constructor is used when we run as an application.
public MyApp(JFrame frame) {
mRPC = frame;
}

// The init method is the place to put the code to initialize the applet.  The code to set up the
// user interface usually goes here.  We avoid putting applet initialization code in applet constructors
// because an applet is not guaranteed to have a full environment until the init method is called.
public void init() {
// We will put all our components in a JPanel and then set this panel
// as the content pane for the applet or application.
JPanel panel = new JPanel();
panel.setLayout(new BorderLayout());

JSlider slider = new JSlider(0,50,0);
panel.add(slider, BorderLayout.SOUTH);
final DrawingArea drawingArea = new DrawingArea();
panel.add(drawingArea, BorderLayout.CENTER);

slider.addChangeListener(new ChangeListener() {
public void stateChanged(ChangeEvent e) {
JSlider source = (JSlider)e.getSource();
if (!source.getValueIsAdjusting()) {
int offset = (int)source.getValue();
drawingArea.setOffset(offset);
drawingArea.repaint();
}
}
});

mRPC.setContentPane(panel);
}

// The start method is the place to start the execution of the applet.
// For example, this is where you would tell an animation to start running.
public void start() {
}

// The stop method is the place to stop the execution of the applet.
// This is where you would tell an animation to stop running.
public void stop() {
}

// The destroy method is where you would do any final cleanup that needs to be done.  The
// destroy method is rarely required, since most of the cleanup can usually be done in stop().
public void destroy() {
}

public static void main(String[] args) {
JFrame frame = new JFrame();
final MyApp app = new MyApp(frame);

frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
app.stop();
app.destroy();
System.exit(0);
}
});

app.init();

frame.setSize(400, 400);
frame.setVisible(true);

app.start();
}
}
 

// A user interface component, which is to be added to the applet.
class DrawingArea extends JPanel {
private int mOffset;

public DrawingArea() {
setBackground(Color.white);
}

public void setOffset(int offset) {
mOffset = offset;
}

public void paintComponent(Graphics g) {
super.paintComponent(g);
g.setFont(new Font("Helvetica", Font.PLAIN, 24));
g.setColor(Color.green);
g.drawString("An Applet or an Application?", 10+mOffset, 50);
g.drawString("That is the question.", 10+mOffset, 100);
}
}

mypage.html

<HTML>

<APPLET CODE="MyApp.class" WIDTH=400 HEIGHT=400>
</APPLET>

</HTML>

Topics

  1. Custom Painting 
  2. Simple 2D Graphics 
  3. A Graphics Example 

1. Custom Painting

(Ref. Java® Tutorial)

So far, we have seen user interface components that display static content. The individual components possessed sufficient knowledge to draw themselves, so we did not have to do anything special beyond creating the components and describing their layout. If a component is obscured by some other window and then uncovered again, it is the job of the window system to make sure that the component is properly redrawn.

There are instances, however, where we will want to change the appearance of a component: for example, we may wish to draw a graph, display an image, or even show an animation within the component. This requires the use of custom painting code. The recommended way to implement custom painting is to extend the JPanel class. We will need to be concerned with two methods:

  • The paintComponent() method specifies what the component should draw. We can override this method to draw text, graphics, etc. The paintComponent() method should never be called directly. It is called indirectly, either because the window system determines that the component needs to be redrawn or because we have issued a call to repaint().
  • The repaint() method requests that the screen be updated as soon as possible. It results in a call to the paintComponent() method. repaint() behaves asynchronously, i.e., it returns immediately without waiting for paintComponent() to complete.

The following code illustrates how custom painting works. A JPanel subclass is used to listen to mouse events and then display a message at the location where the mouse is pressed or released.

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

class Main {
public static void main(String[] args0) {
JFrame frame = new JFrame();
frame.setSize(400, 400);

DrawingArea drawingArea = new DrawingArea();
frame.getContentPane().add(drawingArea);

frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
frame.setVisible(true);
}
}

class DrawingArea extends JPanel {
private String mText;
private static String mStr1 = "The mouse was pressed here!";
private static String mStr2 = "The mouse was released here!";
private int miX, miY;

// The constructor simply registers the drawing area to receive mouse events from itself.
public DrawingArea() {
addMouseListener(new MouseAdapter() {
public void mousePressed(MouseEvent e) {
miX = e.getX();
miY = e.getY();
mText = mStr1;
repaint();
}
public void mouseReleased(MouseEvent e) {
miX = e.getX();
miY = e.getY();
mText = mStr2;
repaint();
}
});
}

// The paint method.  This gets called in response to repaint().
public void paintComponent(Graphics g) {
super.paintComponent(g);           // This paints the background.
if (mText != null)
g.drawString(mText, miX, miY);
}
}

Note that prior to the introduction of the Swing package, one would override the paint() method to implement custom painting. In Swing applications, however, we override the paintComponent() method instead. The paintComponent() method is called by the paint() method of class JComponent. JComponent's paint() method also implements features such as double buffering, which are useful in animation.
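The paint()-to-paintComponent() delegation described above can be observed without opening a window, by calling paint() directly against an off-screen image. Below is a minimal sketch; the class name PaintDemo and its painted flag are our own illustrative names.

```java
import javax.swing.*;
import java.awt.*;
import java.awt.image.BufferedImage;

// A small check that JComponent's paint() ends up invoking our
// paintComponent() override.  We paint into a BufferedImage, so no
// window is required.
class PaintDemo extends JPanel {
    boolean painted = false;           // set when paintComponent() runs

    public void paintComponent(Graphics g) {
        super.paintComponent(g);       // paint the background
        painted = true;
        g.drawString("painted via paint()", 10, 20);
    }

    public static void main(String[] args) {
        PaintDemo panel = new PaintDemo();
        panel.setSize(100, 50);
        BufferedImage image = new BufferedImage(100, 50, BufferedImage.TYPE_INT_RGB);
        Graphics g = image.getGraphics();
        panel.paint(g);                // calls paintComponent(), among other things
        g.dispose();
        System.out.println("paintComponent called: " + panel.painted);
    }
}
```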

2. Simple 2D Graphics

The paintComponent() method gives us a graphics context, which is an instance of a Graphics subclass. A graphics context bundles information such as the area into which we can draw, the font and color to be used, the clipping region, etc. Note that we do not instantiate the graphics context in our program; in fact the Graphics class itself is an abstract class. The Graphics class provides methods for drawing simple graphics primitives, like lines, rectangles, ovals, arcs and polygons. It also provides methods for drawing text, as we saw above.

This program illustrates how to draw some basic shapes.

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

class Main {
public static void main(String[] args0) {
JFrame frame = new JFrame();
frame.setSize(400, 400);

DrawingArea drawingArea = new DrawingArea();
frame.getContentPane().add(drawingArea);
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
frame.setVisible(true);
}
}

class DrawingArea extends JPanel {
public void paintComponent(Graphics g) {
super.paintComponent(g);

// Draw some simple geometric primitives.
g.setColor(Color.red);
g.drawLine(10, 10, 40, 50);                                            // x1, y1, x2, y2

g.setColor(Color.green);
g.drawRect(100, 100, 40, 30);                                        // x, y, width, height

g.setColor(Color.yellow);
g.drawOval(100, 200, 30, 50);                                        // x, y, width, height

g.setColor(Color.blue);
g.drawArc(200, 200, 50, 30, 45, 90);                             // x, y, width, height, start angle, arc angle

int x1_points[] = {100, 130, 140, 115, 90};
int y1_points[] = {300, 300, 340, 370, 340};
g.setColor(Color.black);
g.drawPolygon(x1_points, y1_points, x1_points.length);   // x array, y array, length

int x2_points[] = {300, 330, 340, 315, 290};
int y2_points[] = {300, 300, 340, 370, 340};
g.setColor(Color.cyan);
g.drawPolyline(x2_points, y2_points, x2_points.length);    // x array, y array, length

g.setColor(Color.orange);
g.fillRect(300, 100, 40, 30);                                             // x, y, width, height

g.setColor(Color.magenta);
g.fill3DRect(300, 200, 40, 30, true);                                // x, y, width, height, raised
}
}

The Java® 2D API provides a range of advanced capabilities, such as stroking and filling, affine transformations, compositing and transparency.
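As a taste of those capabilities, the sketch below strokes a dashed rectangle, rotates an ellipse with an affine transform, and overlays a half-transparent rectangle using alpha compositing. It renders into a BufferedImage rather than a window, and the class name TwoDDemo is our own.

```java
import java.awt.*;
import java.awt.geom.*;
import java.awt.image.BufferedImage;

// Java 2D features in miniature: stroking, an affine transformation,
// and compositing/transparency.  Graphics2D is the 2D-capable subclass
// of Graphics.
class TwoDDemo {
    static BufferedImage render() {
        BufferedImage image = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = image.createGraphics();
        g2.setColor(Color.white);
        g2.fillRect(0, 0, 200, 200);

        // Stroking: a thick, dashed outline.
        g2.setColor(Color.blue);
        g2.setStroke(new BasicStroke(4.0f, BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND,
                                     1.0f, new float[] {8.0f, 6.0f}, 0.0f));
        g2.draw(new Rectangle2D.Double(20, 20, 80, 60));

        // Affine transformation: rotate subsequent drawing about (100, 100).
        g2.setTransform(AffineTransform.getRotateInstance(Math.PI / 8, 100, 100));
        g2.setColor(Color.red);
        g2.fill(new Ellipse2D.Double(60, 120, 80, 40));

        // Compositing: a half-transparent rectangle over the ellipse.
        g2.setTransform(new AffineTransform());   // back to the identity
        g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g2.setColor(Color.green);
        g2.fillRect(50, 110, 100, 60);

        g2.dispose();
        return image;
    }

    public static void main(String[] args) {
        BufferedImage image = render();
        System.out.println("Rendered a " + image.getWidth() + "x" + image.getHeight() + " image.");
    }
}
```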

3. A Graphics Example

Here is a complete program that allows you to interactively define points, lines, and polygons using mouse input. This program can be run either as an application or as an applet.

// This is a Java graphics example that can be run either as an applet or as an application.
// Created by Kevin Amaratunga 10/17/1997.  Converted to Swing 10/17/1999.

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.util.*;

// In order to run as an applet, class Geometry must be declared as a public class.  Note that there
// cannot be more than one public class in a .java file.  Also, the public class must have the
// same name as the .java file.
public class Geometry extends JApplet {
JTextArea mTextArea;
DrawingArea mDrawingArea;

public Geometry() {
// Get the applet's container.
Container c = getContentPane();

// Choose a layout manager.  BorderLayout is a straightforward one to use.
c.setLayout(new BorderLayout());

// Create a drawing area and add it to the center of the applet.
mDrawingArea = new DrawingArea(this);
c.add("Center", mDrawingArea);

// Create a read only text area to be used for displaying
// information.  Add it to the bottom of the applet.
mTextArea = new JTextArea();
JScrollPane scrollPane = new JScrollPane(mTextArea);
scrollPane.setPreferredSize(new Dimension(600, 100));
mTextArea.setEditable(false);
c.add("South", scrollPane);
}

public JTextArea getTextArea() {
return mTextArea;
}

public static void main(String args[]) {
// Create the applet object.
Geometry geomApplet = new Geometry();

// Create a frame.  Then set its size and title.
JFrame frame = new JFrame();
frame.setSize(600, 600);
frame.setTitle(geomApplet.getClass().getName());

// Make the frame closable.
WindowListener listener = new WindowAdapter() {
// An anonymous class that extends WindowAdapter.
public void windowClosing(WindowEvent e) {
System.out.println("Window closing");
System.exit(0);
}
};
frame.addWindowListener(listener);

// Add the applet to the center of the frame.
frame.getContentPane().add("Center", geomApplet);

// Initialize the applet.
geomApplet.init();

// Make the frame visible.
frame.setVisible(true);

// Start the applet.
geomApplet.start();
}
}
 

// The drawing area is the area within which all the objects will be drawn.
class DrawingArea extends JPanel implements MouseListener {
// Parent and child widgets.
Geometry mGeomApplet;                     // The parent applet.
JPopupMenu mPopupMenu;                 // Popup menu for creating new objects.

// Object lists.
Vector mPointList;                             // List of all Point objects.
Vector mLineList;                               // List of all Line objects.
Vector mPolygonList;                         // List of all Polygon objects.

// Constants that indicate which kind of object (if any) is currently being created.
static final int NO_OBJECT = 0;
static final int POINT_OBJECT = 1;
static final int LINE_OBJECT = 2;
static final int POLYGON_OBJECT = 3;

// Miscellaneous state variables.
int miLastButton = 0;                        // Last button for which an event was received.
int miAcceptingInput = 0;                  // Type of object (if any) that we are currently creating.
int miPointsEntered = 0;                   // Number of points entered for this object so far.
Object mCurrentObject = null;           // The object that we are currently creating.
 

// DrawingArea constructor.
DrawingArea(Geometry geomApplet) {
JMenuItem menuItem;

mGeomApplet = geomApplet;

// Set the background color.
setBackground(Color.white);

// Register the drawing area to start listening to mouse events.
addMouseListener(this);

// Create a popup menu and make it a child of the drawing area, but don't show it just yet.
mPopupMenu = new JPopupMenu("New Object");
menuItem = new JMenuItem("Point");
menuItem.addActionListener(new PointActionListener(this));
mPopupMenu.add(menuItem);
menuItem = new JMenuItem("Line");
menuItem.addActionListener(new LineActionListener(this));
mPopupMenu.add(menuItem);
menuItem = new JMenuItem("Polygon");
menuItem.addActionListener(new PolygonActionListener(this));
mPopupMenu.add(menuItem);
add(mPopupMenu);

// Create the object lists with a reasonable initial capacity.
mPointList = new Vector(10);
mLineList = new Vector(10);
mPolygonList = new Vector(10);
}
 

// The paint method.
public void paintComponent(Graphics g) {
int i;

// Paint the background.
super.paintComponent(g);

// Draw all objects that are stored in the object lists.
for (i = 0; i < mPointList.size(); i++) {
Point point = (Point)mPointList.elementAt(i);
g.fillRect(point.x-1, point.y-1, 3, 3);
}

for (i = 0; i < mLineList.size(); i++) {
Line line = (Line)mLineList.elementAt(i);
line.draw(g);
}

for (i = 0; i < mPolygonList.size(); i++) {
Polygon polygon = (Polygon)mPolygonList.elementAt(i);
int j;

g.setColor(Color.red);
g.drawPolygon(polygon);
g.setColor(Color.black);
for (j = 0; j < polygon.npoints; j++) {
g.fillRect(polygon.xpoints[j], polygon.ypoints[j], 3, 3);
}
}

// Draw as much of the current object as available.
switch (miAcceptingInput) {
case LINE_OBJECT:
Line line = (Line)mCurrentObject;
if (line.mb1 && !line.mb2)
g.fillRect(line.mEnd1.x-1, line.mEnd1.y-1, 3, 3);
break;

case POLYGON_OBJECT:
Polygon polygon = (Polygon)mCurrentObject;
int j;
g.setColor(Color.red);
g.drawPolyline(polygon.xpoints, polygon.ypoints, polygon.npoints);
g.setColor(Color.black);
for (j = 0; j < polygon.npoints; j++) {
g.fillRect(polygon.xpoints[j], polygon.ypoints[j], 3, 3);
}
break;

default:
break;
}

// Draw some text at the top of the drawing area.
int w = getSize().width;
int h = getSize().height;
g.drawRect(0, 0, w - 1, h - 1);
g.setFont(new Font("Helvetica", Font.PLAIN, 15));
g.drawString("Drawing area", (w - g.getFontMetrics().stringWidth("Drawing area"))/2, 10);
}
 

// The next five methods are required, since we implement the
// MouseListener interface.  We are only interested in mouse pressed
// events.
public void mousePressed(MouseEvent e) {
int iX = e.getX();  // The x and y coordinates of the
int iY = e.getY();  // mouse event.
int iModifier = e.getModifiers();

if ((iModifier & InputEvent.BUTTON1_MASK) != 0) {
miLastButton = 1;

// If we are currently accepting input for a new object,
// then add the current point to the object.
if (miAcceptingInput != NO_OBJECT)
addPointToObject(iX, iY);
}
else if ((iModifier & InputEvent.BUTTON2_MASK) != 0) {
miLastButton = 2;

}
else if ((iModifier & InputEvent.BUTTON3_MASK) != 0) {
miLastButton = 3;

if (miAcceptingInput == NO_OBJECT) {
// Display the popup menu provided we are not accepting
// any input for a new object.
mPopupMenu.show(this, iX, iY);
}
else if (miAcceptingInput == POLYGON_OBJECT) {
// If current object is a polygon, finish it.
mPolygonList.addElement(mCurrentObject);
String str = "Finished creating polygon object.\n";
mGeomApplet.getTextArea().append(str);
mGeomApplet.repaint();
miAcceptingInput = NO_OBJECT;
miPointsEntered = 0;
mCurrentObject = null;
}
}
}

public void mouseClicked(MouseEvent e) {}

public void mouseEntered(MouseEvent e) {}

public void mouseExited(MouseEvent e) {}

public void mouseReleased(MouseEvent e) {}

public void getPointInput() {
miAcceptingInput = POINT_OBJECT;
mCurrentObject = (Object)new Point();
mGeomApplet.getTextArea().append("New point object: enter point.\n");
}

public void getLineInput() {
miAcceptingInput = LINE_OBJECT;
mCurrentObject = (Object)new Line();
mGeomApplet.getTextArea().append("New line: enter end points.\n");
}

public void getPolygonInput() {
miAcceptingInput = POLYGON_OBJECT;
mCurrentObject = (Object)new Polygon();
mGeomApplet.getTextArea().append("New polygon: enter vertices ");
mGeomApplet.getTextArea().append("(click right button to finish).\n");
}

void addPointToObject(int iX, int iY) {
String str;

miPointsEntered++;
switch (miAcceptingInput) {
case POINT_OBJECT:
str = "Point at (" + iX + "," + iY + ")\n";
mGeomApplet.getTextArea().append(str);
Point point = (Point)mCurrentObject;
point.x = iX;
point.y = iY;
mPointList.addElement(mCurrentObject);
str = "Finished creating point object.\n";
mGeomApplet.getTextArea().append(str);
mGeomApplet.repaint();
miAcceptingInput = NO_OBJECT;
miPointsEntered = 0;
mCurrentObject = null;
break;

case LINE_OBJECT:
if (miPointsEntered <= 2) {
str = "End " + miPointsEntered + " at (" + iX + "," + iY + ")";
str += "\n";
mGeomApplet.getTextArea().append(str);
}
Line line = (Line)mCurrentObject;
if (miPointsEntered == 1) {
line.setEnd1(iX, iY);
mGeomApplet.repaint();
}
else {
if (miPointsEntered == 2) {
line.setEnd2(iX, iY);
mLineList.addElement(mCurrentObject);
str = "Finished creating line object.\n";
mGeomApplet.getTextArea().append(str);
mGeomApplet.repaint();
}
miAcceptingInput = NO_OBJECT;
miPointsEntered = 0;
mCurrentObject = null;
}
break;

case POLYGON_OBJECT:
str = "Vertex " + miPointsEntered + " at (" + iX + "," + iY + ")";
str += "\n";
mGeomApplet.getTextArea().append(str);
Polygon polygon = (Polygon)mCurrentObject;
polygon.addPoint(iX, iY);
mGeomApplet.repaint();
break;

default:
break;
}                           // End switch.
}
}
 

// Action listener to create a new Point object.
class PointActionListener implements ActionListener {
DrawingArea mDrawingArea;

PointActionListener(DrawingArea drawingArea) {
mDrawingArea = drawingArea;
}

public void actionPerformed(ActionEvent e) {
mDrawingArea.getPointInput();
}
}
 

// Action listener to create a new Line object.
class LineActionListener implements ActionListener {
DrawingArea mDrawingArea;

LineActionListener(DrawingArea drawingArea) {
mDrawingArea = drawingArea;
}

public void actionPerformed(ActionEvent e) {
mDrawingArea.getLineInput();
}
}
 

// Action listener to create a new Polygon object.
class PolygonActionListener implements ActionListener {
DrawingArea mDrawingArea;

PolygonActionListener(DrawingArea drawingArea) {
mDrawingArea = drawingArea;
}

public void actionPerformed(ActionEvent e) {
mDrawingArea.getPolygonInput();
}
}
 

// A line class.
class Line {
Point mEnd1, mEnd2;
boolean mb1, mb2;

Line() {
mb1 = mb2 = false;
mEnd1 = new Point();
mEnd2 = new Point();
}

void setEnd1(int iX, int iY) {
mEnd1.x = iX;
mEnd1.y = iY;
mb1 = true;
}

void setEnd2(int iX, int iY) {
mEnd2.x = iX;
mEnd2.y = iY;
mb2 = true;
}

void draw(Graphics g) {
g.fillRect(mEnd1.x-1, mEnd1.y-1, 3, 3);
g.fillRect(mEnd2.x-1, mEnd2.y-1, 3, 3);
g.setColor(Color.green);
g.drawLine(mEnd1.x, mEnd1.y, mEnd2.x, mEnd2.y);
g.setColor(Color.black);
}
}

Contents

  1. Creating and Destroying Objects - Constructors and Destructors
  2. The new and delete Operators
  3. Scope and the Lifetime of Objects
  4. Data Structures for Managing Objects

1. Creating and Destroying Objects - Constructors and Destructors

(Ref. Lippman 14.1-14.3)

Let's take a closer look at how constructors and destructors work.


A Point Class

Here is a complete example of a Point class. We have organized the code into three separate files:

point.h contains the declaration of the class, which describes the structure of a Point object.

point.C contains the definition of the class, i.e., the actual implementation of the methods.

point_test.C is a program that uses the Point class.

Our Point class has three constructors and one destructor.

Point();                               // The default constructor.
Point(float fX, float fY);       // A constructor that takes two floats.
Point(const Point& p);         // The copy constructor.
~Point();                             // The destructor.

These constructors can be respectively invoked by object definitions such as

Point a;
Point b(1.0, 2.0);
Point c(b);

The default constructor, Point(), is so named because it can be invoked without any arguments. In our example, the default constructor initializes the Point to (0,0). The second constructor creates a Point from a pair of coordinates of type float. Note that we could combine these two constructors into a single constructor which has default arguments:

Point(float fX=0.0, float fY=0.0);

The third constructor is known as a copy constructor, since it creates one Point from another. The object that we want to clone is passed in as a constant reference. Note that we cannot pass by value in this instance, because doing so would itself invoke the copy constructor, leading to infinite recursion. In this example, the destructor does not have to perform any clean-up operations. Later on, we will see examples where the destructor has to release dynamically allocated memory.

Constructors and destructors can be triggered more often than you may imagine. For example, each time a Point is passed to a function by value, a local copy of the object is created. Likewise, each time a Point is returned by value, a temporary copy of the object is created in the calling program. In both cases, we will see an extra call to the copy constructor, and an extra call to the destructor. You are encouraged to put print statements in every constructor and in the destructor, and then carefully observe what happens.
 

point.h

// Declaration of class Point.

#ifndef _POINT_H_
#define _POINT_H_

#include <iostream.h>

class Point {
// The state of a Point object. Property variables are typically
// set up as private data members, which are read from and
// written to via public access methods.
private:
float mfX;
float mfY;

// The behavior of a Point object.
public:
Point();                               // The default constructor.
Point(float fX, float fY);       // A constructor that takes two floats.
Point(const Point& p);         // The copy constructor.
~Point();                             // The destructor.
void print() {                       // This function will be made inline by default.
cout << "(" << mfX << "," << mfY << ")" << endl;
}
void set_x(float fX);
float get_x();
void set_y(float fY);
float get_y();
};

#endif // _POINT_H_

point.C

// Definition of class Point.

#include "point.h"

// A constructor which creates a Point object at (0,0).
Point::Point() {
cout << "In constructor Point::Point()" << endl;
mfX = 0.0;
mfY = 0.0;
}

// A constructor which creates a Point object from two
// floats.
Point::Point(float fX, float fY) {
cout << "In constructor Point::Point(float fX, float fY)" << endl;
mfX = fX;
mfY = fY;
}

// A constructor which creates a Point object from
// another Point object.
Point::Point(const Point& p) {
cout << "In constructor Point::Point(const Point& p)" << endl;
mfX = p.mfX;
mfY = p.mfY;
}

// The destructor.
Point::~Point() {
cout << "In destructor Point::~Point()" << endl;
}

// Modifier for x coordinate.
void Point::set_x(float fX) {
mfX = fX;
}

// Accessor for x coordinate.
float Point::get_x() {
return mfX;
}

// Modifier for y coordinate.
void Point::set_y(float fY) {
mfY = fY;
}

// Accessor for y coordinate.
float Point::get_y() {
return mfY;
}

point_test.C

// Test program for the Point class.

#include "point.h"

int main() {
Point a;
Point b(1.0, 2.0);
Point c(b);

// Print out the current state of all objects.
a.print();
b.print();
c.print();

b.set_x(3.0);
b.set_y(4.0);

// Print out the current state of b.
cout << endl;
b.print();

return 0;
}

2. The new and delete Operators

(Ref. Lippman 4.9, 8.4)

Until now, we have only considered situations in which the exact number of objects to be created is known at compile time. This is rarely the case in real-world software. A web browser cannot predict in advance how many image objects it will find on a web page. What is needed, therefore, is a way to dynamically create and destroy objects at run time. C++ provides two operators for this purpose:

The new operator allows us to allocate memory for one or more objects. It is similar to the malloc() function in the C standard library.

The delete operator allows us to release memory that has previously been allocated using new. It is similar to the free() function in the C standard library. Note that it is an error to apply the delete operator to memory allocated by any means other than new.

We can allocate single objects using statements such as

a = new Point();
b = new Point(2.0, 3.0);

Object arrays can be allocated using statements such as

c = new Point[num_points];

In either case, new returns the starting address of the memory it has allocated, so a, b, and c must be defined as pointer types, Point *. A single object can be released using a statement such as

delete a;

When releasing memory associated with an array, it is important to remember to use the following notation:

delete[] c;

If the square brackets are omitted, only the first object in the array will be released, and the memory associated with the rest of the objects will be leaked.
 

nd_test.C

// Test program for the new and delete operators.

#include "point.h"

int main() {
int num_points;
Point *a, *b, *c;
float d;

// Allocate a single Point object in heap memory. This invokes the default constructor.
a = new Point();

// This invokes a constructor that has two arguments.
b = new Point(2.0, 3.0);

// Print out the two point objects.
cout << "Here are the two Point objects I have created:" << endl;
a->print();
b->print();

// Destroy the two Point objects.
delete a;
delete b;

// Now allocate an array of Point objects in heap memory.
cout << "I will now create an array of Points. How big shall I make it? ";
cin >> num_points;
c = new Point[num_points];

for (int i = 0; i < num_points; i++) {
d = (float)i;
c[i].set_x(d);
c[i].set_y(d + 1.0);
}

// Print out the array of point objects.
cout << "Here is the array I have created:" << endl;
for (int i = 0; i < num_points; i++) {
c[i].print();
}

// Destroy the array of Point objects.
delete[] c;                 // What happens if [] is omitted?

return 0;
}

3. Scope and the Lifetime of Objects

(Ref. Lippman 8.1-8.4)

There are three fundamental ways of using memory in C and C++.

  • Static memory. This is memory allocated by the linker for the duration of the program. Global variables and objects explicitly defined as static fall into this category.
  • Automatic memory. Objects that are allocated in automatic memory are destroyed automatically when they go out of scope. Examples are local variables and function arguments. Objects that reside in automatic memory are said to be allocated on the stack.
  • Dynamic memory. Memory allocated using the new operator (or malloc()) falls into this category. Dynamic memory must be explicitly released using the delete operator (or free(), as appropriate.) Objects that reside in dynamic memory are said to be allocated on the heap.

A garbage collector is a memory manager that automatically identifies unreferenced objects in dynamic memory and then reclaims that memory. The C and C++ standards do not require automatic garbage collection; however, garbage collectors are sometimes implemented in large-scale projects, where it can be difficult to keep track of memory explicitly.

The following program illustrates various uses of memory. Note that the static object in the function foo() is only allocated once, even though foo() is invoked multiple times.
 

sl_test.C

// Test program for scope and the lifetime of objects.

#include "point.h"

Point a(1.0, 2.0);                            // Resides in static memory.

void foo() {
static Point a;                             // Resides in static memory.

a.set_x(a.get_x() + 1.0);
a.print();
}

int main() {
Point a(4.0, 3.0);                        // Resides in automatic memory.

a.print();
::a.print();

for (int i = 0; i < 3; i++)
foo();

Point *b = new Point(5.0, 6.0);    // Resides in heap memory.
b->print();
delete b;

// Curly braces serve as scope delimiters.
{
Point a(7.0, 9.0);                     // Resides in automatic memory.

a.print();
::a.print();
}

return 0;
}
 

Here is the output from the program:

In constructor Point::Point(float fX, float fY)                                     <-- Global object a.
In constructor Point::Point(float fX, float fY)                                     <-- Local object a.
(4,3)
(1,2)
In constructor Point::Point()                                                              <-- Object a in foo().
(1,0)
(2,0)
(3,0)
In constructor Point::Point(float fX, float fY)                                     <-- Object *b.
(5,6)
In destructor Point::~Point()                                                             <-- Object *b.
In constructor Point::Point(float fX, float fY)                                     <-- Second local object a.
(7,9)
(1,2)
In destructor Point::~Point()                                                             <-- Second local object a.
In destructor Point::~Point()                                                             <-- Local object a.
In destructor Point::~Point()                                                             <-- Object a in foo().
In destructor Point::~Point()                                                             <-- Global object a.

4. Data Structures for Managing Objects

We have already seen an example of how to dynamically create an array of objects. This may not be the best approach for managing a collection of objects that is constantly changing, since we may wish to delete a single object while retaining the rest. Instead, we might consider using an array of pointers to hold individually allocated objects, as illustrated in the following example. Even this approach has its limitations, since we need to know in advance how big to make the pointer array. In general, a linked list is the data structure of choice, since it makes no assumptions about the maximum number of objects to be stored. We will see an example of a linked list later.
 

pa_test.C

// Pointer array test program.

#include "point.h"

int main() {
int i, max_points;
Point **a;

max_points = 5;

// Create an array of pointers to Point objects. We will use the
// array elements to hold on to dynamically allocated Point objects.
a = new Point *[max_points];

// Now create some point objects and store them in the array.
for (i = 0; i < max_points; i++)
a[i] = new Point(i, i);

// Let's suppose we want to eliminate the middle Point.
i = (max_points-1) / 2;
delete a[i];
a[i] = NULL;

// Print out the remaining Points.
for (i = 0; i < max_points; i++) {
if (a[i])
a[i]->print();
}

// Delete the remaining Points. Note that it is acceptable to pass a NULL
// pointer to the delete operator.
for (i = 0; i < max_points; i++)
delete a[i];

// Now delete the array of pointers.
delete[] a;

return 0;
}

Topics

  1. Introduction
  2. Text Input
  3. Text Output
  4. Binary Input and Output

1. Introduction

Java® uses a stream-based approach to input and output. A stream in this context is a flow of data, which can either be read in from a data source (e.g., a file, the keyboard, or a socket) or written to a data sink (e.g., a file, the screen, or a socket). Java® currently supports two types of streams:

  • 8-bit streams. These are intended for binary data, i.e., data that will be manipulated at the byte level. The abstract base classes for 8-bit streams are InputStream and OutputStream.
  • 16-bit streams. These are intended for character data. 16-bit streams are required because Java®'s internal representation for characters is the 16-bit Unicode format rather than the 8-bit ASCII format. The abstract base classes for 16-bit streams are Reader and Writer.

It is possible to create a 16-bit Reader from an 8-bit InputStream using the InputStreamReader class, e.g.:

Reader r = new InputStreamReader(System.in);      // System.in is an example of an InputStream.

Likewise, it is possible to create a 16-bit Writer from an 8-bit OutputStream using the OutputStreamWriter class, e.g.:

Writer w = new OutputStreamWriter(System.out);     // System.out is an example of an OutputStream.
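These bridging classes can be exercised without a console at all. The sketch below is our own illustration (the class name BridgeDemo is invented, and in-memory byte streams stand in for System.in and System.out) showing both directions of the 8-bit/16-bit bridge:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class BridgeDemo {
    // Wrap an 8-bit InputStream in a 16-bit Reader and read it back as text.
    public static String readAll(InputStream in) throws IOException {
        Reader r = new InputStreamReader(in, StandardCharsets.UTF_8);
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = r.read()) != -1)
            sb.append((char) c);
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // A ByteArrayInputStream stands in for System.in or a file stream.
        InputStream in = new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(in));              // prints hello

        // The reverse direction: wrap an 8-bit OutputStream in a 16-bit Writer.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8);
        w.write("world");
        w.flush();                                    // push characters through to the bytes
        System.out.println(out.toString("UTF-8"));    // prints world
    }
}
```

The same readAll() call would work unchanged on a FileInputStream or a socket's input stream, since they are all InputStreams.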

2. Text Input

The FileReader class is used to read characters from a file. This class can only read one 16-bit Unicode character at a time (characters stored in 8-bit ASCII are automatically promoted to Unicode). In order to read a full line of text at once, we must layer a BufferedReader on top of the FileReader. The individual words in the line of text can then be extracted using a StringTokenizer. If the text contains numbers, we must also perform string-to-number conversions, such as Integer.parseInt() and Double.parseDouble().

import java.io.*;
import java.util.*;

public class Main {
public static void main(String[] args) {
try {
readText(args[0]);
}
catch (IOException e) {
e.printStackTrace();
}
}

// This function will read data from an ASCII text file.
public static void readText(String fileName) throws IOException {
// First create a FileReader.  A Reader is a 16-bit input stream,
// which is intended for all forms of character (text) input.
Reader reader = new FileReader(fileName);

// Now create a BufferedReader from the Reader.  This allows us to
// read in an entire line at a time.
BufferedReader bufferedReader = new BufferedReader(reader);
String nextLine;

while ((nextLine = bufferedReader.readLine()) != null) {
// Next, we create a StringTokenizer from the line we have just
// read in.  This permits the extraction of nonspace characters.
StringTokenizer tokenizer = new StringTokenizer(nextLine);

// We can now extract various data types as follows.
String companyName = tokenizer.nextToken();
int numberShares = Integer.parseInt(tokenizer.nextToken());
double sharePrice = Double.parseDouble(tokenizer.nextToken());

// Print the data out on the screen.
System.out.print(companyName + " has " + numberShares);
System.out.println(" million shares valued at $" + sharePrice);

}

// Close the file once all the lines have been read.
bufferedReader.close();
}
}

This program can be easily converted to read in data from the keyboard. Simply replace

Reader reader = new FileReader(fileName);

with

Reader reader = new InputStreamReader(System.in);
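Because the file and keyboard versions differ only in which Reader they start from, the same BufferedReader/StringTokenizer layering also works over a StringReader. The sketch below is illustrative (the class name, method, and sample data are our own), parsing one record of the same "company shares price" format used above:

```java
import java.io.*;
import java.util.StringTokenizer;

public class ParseDemo {
    // Parse one "company shares price" record.  The source Reader could be a
    // FileReader, an InputStreamReader over System.in, or (here) a StringReader.
    public static double parseTotalValue(Reader source) throws IOException {
        BufferedReader br = new BufferedReader(source);
        StringTokenizer tok = new StringTokenizer(br.readLine());
        String company = tok.nextToken();
        int shares = Integer.parseInt(tok.nextToken());
        double price = Double.parseDouble(tok.nextToken());
        return shares * price;   // total value of the holding
    }

    public static void main(String[] args) throws IOException {
        // StringReader is simply a third kind of Reader.
        double total = parseTotalValue(new StringReader("Acme 10 2.5"));
        System.out.println(total);   // prints 25.0
    }
}
```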

3. Text Output

The FileWriter class is used to write text to a file. This class is only capable of writing out individual characters and strings. We can layer a PrintWriter on top of the FileWriter, so that we can write out numbers as well.

import java.io.*;
import java.util.*;
import java.text.*;

public class Main {
public static void main(String[] args) {
try {
writeText(args[0]);
}
catch (IOException e) {
e.printStackTrace();
}
}

// This function will write data to an ASCII text file.
public static void writeText(String fileName) throws IOException {
// First create a FileWriter.  A Writer is a 16-bit output stream,
// which is intended for all forms of character (text) output.
Writer writer = new FileWriter(fileName);

// Next create a PrintWriter from the Writer.  This allows us to
// print out other data types besides characters and Strings.
PrintWriter printWriter = new PrintWriter(writer);

// Now print out various data types.
boolean b = true;
int i = 20;
double d = 1.124;
String str = "This is some text.";

printWriter.print(b);
printWriter.print(i);
printWriter.print(d);
printWriter.println("\n" + str);

// This is an example of formatted output.  In the format string,
// 0 and # represent digits.  # means that the digit should not
// be displayed if it is 0.
DecimalFormat df = new DecimalFormat("#.000");
printWriter.println(df.format(200.0));  // 200.000
printWriter.println(df.format(0.123));  // .123

// This will flush the PrintWriter's internal buffer, causing the
// data to be actually written to file.
printWriter.flush();

// Finally, close the file.
printWriter.close();
}
}
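The # versus 0 distinction in DecimalFormat patterns can be seen by comparing patterns side by side. This sketch is our own (the class name and helper are invented); Locale.US is pinned so that the decimal separator is always a period, since the plain DecimalFormat constructor uses the default locale's symbols:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class FormatDemo {
    public static String fmt(String pattern, double value) {
        // Pin the symbols to Locale.US so '.' is always the decimal separator.
        DecimalFormat df =
            new DecimalFormat(pattern, DecimalFormatSymbols.getInstance(Locale.US));
        return df.format(value);
    }

    public static void main(String[] args) {
        System.out.println(fmt("#.000", 200.0));  // 200.000
        System.out.println(fmt("#.000", 0.123));  // .123  -- '#' suppresses the leading 0
        System.out.println(fmt("0.000", 0.123));  // 0.123 -- '0' forces the digit
        System.out.println(fmt("#.##", 1.124));   // 1.12  -- rounded to two places
    }
}
```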

4. Binary Input and Output

Binary input and output is done using the 8-bit streams. To read binary data from a file, we create a FileInputStream and then layer a DataInputStream on top of it. To write binary data to a file, we create a FileOutputStream and then layer a DataOutputStream on top of it. The following example illustrates this.

import java.io.*;

public class Main {
public static void main(String[] args) {
try {
writeBinary(args[0]);
readBinary(args[0]);
}
catch (IOException e) {
e.printStackTrace();
}
}

// This function will write binary data to a file.
public static void writeBinary(String fileName) throws IOException {
// First create a FileOutputStream.
OutputStream outputStream = new FileOutputStream(fileName);

// Now layer a DataOutputStream on top of it.
DataOutputStream dataOutputStream = new DataOutputStream(outputStream);

// Now write out some data in binary format.  Strings are written out
// in UTF format, which is a bridge between ASCII and Unicode.
int i = 5;
double d = 1.124;
char c = 'z';
String str = "Some text";

dataOutputStream.writeInt(i);           // Increases file size by 4 bytes.
dataOutputStream.writeDouble(d);   // Increases file size by 8 bytes.
dataOutputStream.writeChar(c);      // Increases file size by 2 bytes.
dataOutputStream.writeUTF(str);     // Increases file size by 2+9 bytes.

// Close the file.
dataOutputStream.close();
}

// This function will read binary data from a file.
public static void readBinary(String fileName) throws IOException {
// First create a FileInputStream.
InputStream inputStream = new FileInputStream(fileName);

// Now layer a DataInputStream on top of it.
DataInputStream dataInputStream = new DataInputStream(inputStream);

// Now read in data from the binary file.
int i;
double d;
char c;
String str;

i = dataInputStream.readInt();
d = dataInputStream.readDouble();
c = dataInputStream.readChar();
str = dataInputStream.readUTF();

System.out.print("integer " + i + " double " + d);
System.out.println(" char " + c + " String " + str);

// Close the file.
dataInputStream.close();
}
}
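The byte counts noted in the comments of writeBinary() can be checked directly by writing to an in-memory buffer instead of a file. The sketch below is our own (the class name and helper are invented); it also illustrates that the reads must mirror the writes in order and type:

```java
import java.io.*;

public class BinaryDemo {
    // Round-trip the same values as writeBinary()/readBinary(), through an
    // in-memory buffer so the total size in bytes can be inspected.
    public static int roundTripSize() throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeInt(5);            // 4 bytes
        out.writeDouble(1.124);     // 8 bytes
        out.writeChar('z');         // 2 bytes
        out.writeUTF("Some text");  // 2-byte length prefix + 9 bytes of text
        out.close();

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buffer.toByteArray()));
        // Reads must occur in exactly the order and types of the writes.
        if (in.readInt() != 5 || in.readDouble() != 1.124
                || in.readChar() != 'z' || !in.readUTF().equals("Some text"))
            throw new IOException("round trip failed");
        return buffer.size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTripSize());   // prints 25 (4 + 8 + 2 + 11)
    }
}
```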


Overview of make

The make utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them. This manual describes GNU make, which was implemented by Richard Stallman and Roland McGrath. Development since Version 3.76 has been handled by Paul D. Smith.

GNU make conforms to section 6.2 of IEEE Standard 1003.2-1992 (POSIX.2).

Our examples show C programs, since they are most common, but you can use make with any programming language whose compiler can be run with a shell command. Indeed, make is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change.

To prepare to use make, you must write a file called the makefile that describes the relationships among files in your program and provides commands for updating each file. In a program, typically, the executable file is updated from object files, which are in turn made by compiling source files.

Once a suitable makefile exists, each time you change some source files, this simple shell command:

make

suffices to perform all necessary recompilations. The make program uses the makefile data base and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the commands recorded in the data base.

You can provide command line arguments to make to control which files should be recompiled, or how. See section How to Run make.

How to Read This Manual

If you are new to make, or are looking for a general introduction, read the first few sections of each chapter, skipping the later sections. In each chapter, the first few sections contain introductory or general information and the later sections contain specialized or technical information. The exception is section An Introduction to Makefiles, all of which is introductory.

If you are familiar with other make programs, see section Features of GNU make, which lists the enhancements GNU make has, and section Incompatibilities and Missing Features, which explains the few things GNU make lacks that others have.

For a quick summary, see section Summary of Options, section Quick Reference, and section Special Built-in Target Names.

Problems and Bugs

If you have problems with GNU make or think you’ve found a bug, please report it to the developers; we cannot promise to do anything but we might well want to fix it.

Before reporting a bug, make sure you’ve actually found a real bug. Carefully reread the documentation and see if it really says you can do what you’re trying to do. If it’s not clear whether you should be able to do something or not, report that too; it’s a bug in the documentation!

Before reporting a bug or trying to fix it yourself, try to isolate it to the smallest possible makefile that reproduces the problem. Then send us the makefile and the exact results make gave you. Also say what you expected to occur; this will help us decide whether the problem was really in the documentation.

Once you’ve got a precise problem, please send electronic mail to:

bug-make@gnu.org

Please include the version number of make you are using. You can get this information with the command make --version. Be sure also to include the type of machine and operating system you are using. If possible, include the contents of the file config.h that is generated by the configuration process.

An Introduction to Makefiles

You need a file called a makefile to tell make what to do. Most often, the makefile tells make how to compile and link a program.

In this chapter, we will discuss a simple makefile that describes how to compile and link a text editor which consists of eight C source files and three header files. The makefile can also tell make how to run miscellaneous commands when explicitly asked (for example, to remove certain files as a clean-up operation). To see a more complex example of a makefile, see section Complex Makefile Example.

When make recompiles the editor, each changed C source file must be recompiled. If a header file has changed, each C source file that includes the header file must be recompiled to be safe. Each compilation produces an object file corresponding to the source file. Finally, if any source file has been recompiled, all the object files, whether newly made or saved from previous compilations, must be linked together to produce the new executable editor.

What a Rule Looks Like

 

A simple makefile consists of “rules” with the following shape:

 

target … : prerequisites …
        command
        …

A target is usually the name of a file that is generated by a program; examples of targets are executable or object files. A target can also be the name of an action to carry out, such as clean (see section Phony Targets).

A prerequisite is a file that is used as input to create the target. A target often depends on several files.

A command is an action that make carries out. A rule may have more than one command, each on its own line. Please note: you need to put a tab character at the beginning of every command line! This is an obscurity that catches the unwary.

Usually a command is in a rule with prerequisites and serves to create a target file if any of the prerequisites change. However, the rule that specifies commands for the target need not have prerequisites. For example, the rule containing the delete command associated with the target clean does not have prerequisites.

A rule, then, explains how and when to remake certain files which are the targets of the particular rule. make carries out the commands on the prerequisites to create or update the target. A rule can also explain how and when to carry out an action. See section Writing Rules.

A makefile may contain other text besides rules, but a simple makefile need only contain rules. Rules may look somewhat more complicated than shown in this template, but all fit the pattern more or less.

A Simple Makefile

 

Here is a straightforward makefile that describes the way an executable file called edit depends on eight object files which, in turn, depend on eight C source and three header files.

In this example, all the C files include defs.h, but only those defining editing commands include command.h, and only low level files that change the editor buffer include buffer.h.

edit : main.o kbd.o command.o display.o \
       insert.o search.o files.o utils.o
        cc -o edit main.o kbd.o command.o display.o \
                   insert.o search.o files.o utils.o

main.o : main.c defs.h
        cc -c main.c
kbd.o : kbd.c defs.h command.h
        cc -c kbd.c
command.o : command.c defs.h command.h
        cc -c command.c
display.o : display.c defs.h buffer.h
        cc -c display.c
insert.o : insert.c defs.h buffer.h
        cc -c insert.c
search.o : search.c defs.h buffer.h
        cc -c search.c
files.o : files.c defs.h buffer.h command.h
        cc -c files.c
utils.o : utils.c defs.h
        cc -c utils.c

clean :
        rm edit main.o kbd.o command.o display.o \
           insert.o search.o files.o utils.o

We split each long line into two lines using backslash-newline; this is like using one long line, but is easier to read.

To use this makefile to create the executable file called edit, type:

make

To use this makefile to delete the executable file and all the object files from the directory, type:

make clean

In the example makefile, the targets include the executable file edit, and the object files main.o and kbd.o. The prerequisites are files such as main.c and defs.h. In fact, each .o file is both a target and a prerequisite. Commands include cc -c main.c and cc -c kbd.c.

When a target is a file, it needs to be recompiled or relinked if any of its prerequisites change. In addition, any prerequisites that are themselves automatically generated should be updated first. In this example, edit depends on each of the eight object files; the object file main.o depends on the source file main.c and on the header file defs.h.

A shell command follows each line that contains a target and prerequisites. These shell commands say how to update the target file. A tab character must come at the beginning of every command line to distinguish command lines from other lines in the makefile. (Bear in mind that make does not know anything about how the commands work. It is up to you to supply commands that will update the target file properly. All make does is execute the commands in the rule you have specified when the target file needs to be updated.)

The target clean is not a file, but merely the name of an action. Since you normally do not want to carry out the actions in this rule, clean is not a prerequisite of any other rule. Consequently, make never does anything with it unless you tell it specifically. Note that this rule not only is not a prerequisite, it also does not have any prerequisites, so the only purpose of the rule is to run the specified commands. Targets that do not refer to files but are just actions are called phony targets. See section Phony Targets, for information about this kind of target. See section Errors in Commands, to see how to cause make to ignore errors from rm or any other command.

How make Processes a Makefile

 

By default, make starts with the first target (not targets whose names start with .). This is called the default goal. (Goals are the targets that make strives ultimately to update. See section Arguments to Specify the Goals.)

In the simple example of the previous section, the default goal is to update the executable program edit; therefore, we put that rule first.

Thus, when you give the command:

make

make reads the makefile in the current directory and begins by processing the first rule. In the example, this rule is for relinking edit; but before make can fully process this rule, it must process the rules for the files that edit depends on, which in this case are the object files. Each of these files is processed according to its own rule. These rules say to update each .o file by compiling its source file. The recompilation must be done if the source file, or any of the header files named as prerequisites, is more recent than the object file, or if the object file does not exist.

The other rules are processed because their targets appear as prerequisites of the goal. If some other rule is not depended on by the goal (or anything it depends on, etc.), that rule is not processed, unless you tell make to do so (with a command such as make clean).

Before recompiling an object file, make considers updating its prerequisites, the source file and header files. This makefile does not specify anything to be done for them (the .c and .h files are not the targets of any rules), so make does nothing for these files. But make would update automatically generated C programs, such as those made by Bison or Yacc, by their own rules at this time.

After recompiling whichever object files need it, make decides whether to relink edit. This must be done if the file edit does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than edit, so edit is relinked.

Thus, if we change the file insert.c and run make, make will compile that file to update insert.o, and then link edit. If we change the file command.h and run make, make will recompile the object files kbd.o, command.o and files.o and then link the file edit.

Variables Make Makefiles Simpler

 

In our example, we had to list all the object files twice in the rule for edit (repeated here):

edit : main.o kbd.o command.o display.o \
       insert.o search.o files.o utils.o
        cc -o edit main.o kbd.o command.o display.o \
                   insert.o search.o files.o utils.o

Such duplication is error-prone; if a new object file is added to the system, we might add it to one list and forget the other. We can eliminate the risk and simplify the makefile by using a variable. Variables allow a text string to be defined once and substituted in multiple places later (see section How to Use Variables).

It is standard practice for every makefile to have a variable named objects, OBJECTS, objs, OBJS, obj, or OBJ which is a list of all object file names. We would define such a variable objects with a line like this in the makefile:

objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o

Then, each place we want to put a list of the object file names, we can substitute the variable’s value by writing $(objects) (see section How to Use Variables).

Here is how the complete simple makefile looks when you use a variable for the object files:

objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o

edit : $(objects)
        cc -o edit $(objects)
main.o : main.c defs.h
        cc -c main.c
kbd.o : kbd.c defs.h command.h
        cc -c kbd.c
command.o : command.c defs.h command.h
        cc -c command.c
display.o : display.c defs.h buffer.h
        cc -c display.c
insert.o : insert.c defs.h buffer.h
        cc -c insert.c
search.o : search.c defs.h buffer.h
        cc -c search.c
files.o : files.c defs.h buffer.h command.h
        cc -c files.c
utils.o : utils.c defs.h
        cc -c utils.c
clean :
        rm edit $(objects)

Letting make Deduce the Commands

 

It is not necessary to spell out the commands for compiling the individual C source files, because make can figure them out: it has an implicit rule for updating a .o file from a correspondingly named .c file using a cc -c command. For example, it will use the command cc -c main.c -o main.o to compile main.c into main.o. We can therefore omit the commands from the rules for the object files. See section Using Implicit Rules.

When a .c file is used automatically in this way, it is also automatically added to the list of prerequisites. We can therefore omit the .c files from the prerequisites, provided we omit the commands.

Here is the entire example, with both of these changes, and a variable objects as suggested above:

objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o

edit : $(objects)
        cc -o edit $(objects)

main.o : defs.h
kbd.o : defs.h command.h
command.o : defs.h command.h
display.o : defs.h buffer.h
insert.o : defs.h buffer.h
search.o : defs.h buffer.h
files.o : defs.h buffer.h command.h
utils.o : defs.h

.PHONY : clean
clean :
        -rm edit $(objects)

This is how we would write the makefile in actual practice. (The complications associated with clean are described elsewhere. See section Phony Targets, and section Errors in Commands.)

Because implicit rules are so convenient, they are important. You will see them used frequently.

Another Style of Makefile

 

When the objects of a makefile are created only by implicit rules, an alternative style of makefile is possible. In this style of makefile, you group entries by their prerequisites instead of by their targets. Here is what one looks like:

objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o

edit : $(objects)
        cc -o edit $(objects)

$(objects) : defs.h
kbd.o command.o files.o : command.h
display.o insert.o search.o files.o : buffer.h

Here defs.h is given as a prerequisite of all the object files; command.h and buffer.h are prerequisites of the specific object files listed for them.

Whether this is better is a matter of taste: it is more compact, but some people dislike it because they find it clearer to put all the information about each target in one place.

Rules for Cleaning the Directory

 

Compiling a program is not the only thing you might want to write rules for. Makefiles commonly tell how to do a few other things besides compiling a program: for example, how to delete all the object files and executables so that the directory is clean.

Here is how we could write a make rule for cleaning our example editor:

clean :
        rm edit $(objects)

In practice, we might want to write the rule in a somewhat more complicated manner to handle unanticipated situations. We would do this:

.PHONY : clean
clean :
        -rm edit $(objects)

This prevents make from getting confused by an actual file called clean and causes it to continue in spite of errors from rm. (See section Phony Targets, and section Errors in Commands.)

A rule such as this should not be placed at the beginning of the makefile, because we do not want it to run by default! Thus, in the example makefile, we want the rule for edit, which recompiles the editor, to remain the default goal.

Since clean is not a prerequisite of edit, this rule will not run at all if we give the command make with no arguments. In order to make the rule run, we have to type make clean. See section How to Run make.

Writing Makefiles

The information that tells make how to recompile a system comes from reading a data base called the makefile.

What Makefiles Contain

Makefiles contain five kinds of things: explicit rules, implicit rules, variable definitions, directives, and comments. Rules, variables, and directives are described at length in later chapters.

  • An explicit rule says when and how to remake one or more files, called the rule’s targets. It lists the other files that the targets depend on, called the prerequisites of the target, and may also give commands to use to create or update the targets. See section Writing Rules.

  • An implicit rule says when and how to remake a class of files based on their names. It describes how a target may depend on a file with a name similar to the target and gives commands to create or update such a target. See section Using Implicit Rules.

  • A variable definition is a line that specifies a text string value for a variable that can be substituted into the text later. The simple makefile example shows a variable definition for objects as a list of all object files (see section Variables Make Makefiles Simpler).

  • A directive is a command for make to do something special while reading the makefile. These include reading another makefile (see section Including Other Makefiles), deciding (based on the values of variables) whether to use or ignore a part of the makefile, and defining a variable from a verbatim string containing multiple lines.

  • # in a line of a makefile starts a comment. It and the rest of the line are ignored, except that a trailing backslash not escaped by another backslash will continue the comment across multiple lines. Comments may appear on any of the lines in the makefile, except within a define directive, and perhaps within commands (where the shell decides what is a comment). A line containing just a comment (with perhaps spaces before it) is effectively blank, and is ignored.

What Name to Give Your Makefile

 

By default, when make looks for the makefile, it tries the following names, in order: GNUmakefile, makefile and Makefile.

Normally you should call your makefile either makefile or Makefile. (We recommend Makefile because it appears prominently near the beginning of a directory listing, right near other important files such as README.) The first name checked, GNUmakefile, is not recommended for most makefiles. You should use this name if you have a makefile that is specific to GNU make, and will not be understood by other versions of make. Other make programs look for makefile and Makefile, but not GNUmakefile.

If make finds none of these names, it does not use any makefile. Then you must specify a goal with a command argument, and make will attempt to figure out how to remake it using only its built-in implicit rules. See section Using Implicit Rules.

If you want to use a nonstandard name for your makefile, you can specify the makefile name with the -f or --file option. The arguments -f name or --file=name tell make to read the file name as the makefile. If you use more than one -f or --file option, you can specify several makefiles. All the makefiles are effectively concatenated in the order specified. The default makefile names GNUmakefile, makefile and Makefile are not checked automatically if you specify -f or --file.

Including Other Makefiles

 

The include directive tells make to suspend reading the current makefile and read one or more other makefiles before continuing. The directive is a line in the makefile that looks like this:

include filenames…

filenames can contain shell file name patterns.

Extra spaces are allowed and ignored at the beginning of the line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command line.) Whitespace is required between include and the file names, and between file names; extra whitespace is ignored there and at the end of the directive. A comment starting with # is allowed at the end of the line. If the file names contain any variable or function references, they are expanded. See section How to Use Variables.

For example, if you have three .mk files, a.mk, b.mk, and c.mk, and $(bar) expands to bish bash, then the following expression

include foo *.mk $(bar)

is equivalent to

include foo a.mk b.mk c.mk bish bash

When make processes an include directive, it suspends reading of the containing makefile and reads from each listed file in turn. When that is finished, make resumes reading the makefile in which the directive appears.

One occasion for using include directives is when several programs, handled by individual makefiles in various directories, need to use a common set of variable definitions (see section Setting Variables) or pattern rules (see section Defining and Redefining Pattern Rules).

Another such occasion is when you want to generate prerequisites from source files automatically; the prerequisites can be put in a file that is included by the main makefile. This practice is generally cleaner than that of somehow appending the prerequisites to the end of the main makefile as has been traditionally done with other versions of make. See section Generating Prerequisites Automatically.

If the specified name does not start with a slash, and the file is not found in the current directory, several other directories are searched. First, any directories you have specified with the -I or --include-dir option are searched (see section Summary of Options). Then the following directories (if they exist) are searched, in this order: prefix/include (normally /usr/local/include), /usr/gnu/include, /usr/local/include, /usr/include.

If an included makefile cannot be found in any of these directories, a warning message is generated, but it is not an immediately fatal error; processing of the makefile containing the include continues. Once it has finished reading makefiles, make will try to remake any that are out of date or don’t exist. See section How Makefiles Are Remade. Only after it has tried to find a way to remake a makefile and failed, will make diagnose the missing makefile as a fatal error.

If you want make to simply ignore a makefile which does not exist and cannot be remade, with no error message, use the -include directive instead of include, like this:

-include filenames…

This acts like include in every way except that there is no error (not even a warning) if any of the filenames do not exist. For compatibility with some other make implementations, sinclude is another name for -include.

The Variable MAKEFILES

 

If the environment variable MAKEFILES is defined, make considers its value as a list of names (separated by whitespace) of additional makefiles to be read before the others. This works much like the include directive: various directories are searched for those files (see section Including Other Makefiles). In addition, the default goal is never taken from one of these makefiles and it is not an error if the files listed in MAKEFILES are not found.

The main use of MAKEFILES is in communication between recursive invocations of make (see section Recursive Use of make). It usually is not desirable to set the environment variable before a top-level invocation of make, because it is usually better not to mess with a makefile from outside. However, if you are running make without a specific makefile, a makefile in MAKEFILES can do useful things to help the built-in implicit rules work better, such as defining search paths (see section Searching Directories for Prerequisites).

Some users are tempted to set MAKEFILES in the environment automatically on login, and program makefiles to expect this to be done. This is a very bad idea, because such makefiles will fail to work if run by anyone else. It is much better to write explicit include directives in the makefiles. See section Including Other Makefiles.

How Makefiles Are Remade

Sometimes makefiles can be remade from other files, such as RCS or SCCS files. If a makefile can be remade from other files, you probably want make to get an up-to-date version of the makefile to read in.

To this end, after reading in all makefiles, make will consider each as a goal target and attempt to update it. If a makefile has a rule which says how to update it (found either in that very makefile or in another one) or if an implicit rule applies to it (see section Using Implicit Rules), it will be updated if necessary. After all makefiles have been checked, if any have actually been changed, make starts with a clean slate and reads all the makefiles over again. (It will also attempt to update each of them over again, but normally this will not change them again, since they are already up to date.)

If you know that one or more of your makefiles cannot be remade and you want to keep make from performing an implicit rule search on them, perhaps for efficiency reasons, you can use any normal method of preventing implicit rule lookup to do so. For example, you can write an explicit rule with the makefile as the target, and an empty command string (see section Using Empty Commands).
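For example, an explicit rule with an empty command string, marking the makefile itself as not remakable, might look like this:

```make
# Tell make not to search for a way to rebuild this makefile:
# an explicit rule with an empty command string suppresses the
# implicit rule search for the target.
Makefile: ;
```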

If the makefiles specify a double-colon rule to remake a file with commands but no prerequisites, that file will always be remade (see section Double-Colon Rules). In the case of makefiles, a makefile that has a double-colon rule with commands but no prerequisites will be remade every time make is run, and then again after make starts over and reads the makefiles in again. This would cause an infinite loop: make would constantly remake the makefile, and never do anything else. So, to avoid this, make will not attempt to remake makefiles which are specified as targets of a double-colon rule with commands but no prerequisites.

If you do not specify any makefiles to be read with -f or --file options, make will try the default makefile names; see section What Name to Give Your Makefile. Unlike makefiles explicitly requested with -f or --file options, make is not certain that these makefiles should exist. However, if a default makefile does not exist but can be created by running make rules, you probably want the rules to be run so that the makefile can be used.

Therefore, if none of the default makefiles exists, make will try to make each of them in the same order in which they are searched for (see section What Name to Give Your Makefile) until it succeeds in making one, or it runs out of names to try. Note that it is not an error if make cannot find or make any makefile; a makefile is not always necessary.

When you use the -t or --touch option (see section Instead of Executing the Commands), you would not want to use an out-of-date makefile to decide which targets to touch. So the -t option has no effect on updating makefiles; they are really updated even if -t is specified. Likewise, -q (or --question) and -n (or --just-print) do not prevent updating of makefiles, because an out-of-date makefile would result in the wrong output for other targets. Thus, make -f mfile -n foo will update mfile, read it in, and then print the commands to update foo and its prerequisites without running them. The commands printed for foo will be those specified in the updated contents of mfile.

However, on occasion you might actually wish to prevent updating of even the makefiles. You can do this by specifying the makefiles as goals in the command line as well as specifying them as makefiles. When the makefile name is specified explicitly as a goal, the options -t and so on do apply to them.

Thus, make -f mfile -n mfile foo would read the makefile mfile, print the commands needed to update it without actually running them, and then print the commands needed to update foo without running them. The commands for foo will be those specified by the existing contents of mfile.

Overriding Part of Another Makefile

Sometimes it is useful to have a makefile that is mostly just like another makefile. You can often use the include directive to include one in the other, and add more targets or variable definitions. However, if the two makefiles give different commands for the same target, make will not let you just do this. But there is another way.

In the containing makefile (the one that wants to include the other), you can use a match-anything pattern rule to say that to remake any target that cannot be made from the information in the containing makefile, make should look in another makefile. See section Defining and Redefining Pattern Rules, for more information on pattern rules.

For example, if you have a makefile called Makefile that says how to make the target foo (and other targets), you can write a makefile called GNUmakefile that contains:

foo:
        frobnicate > foo

%: force
        @$(MAKE) -f Makefile $@
force: ;

If you say make foo, make will find GNUmakefile, read it, and see that to make foo, it needs to run the command frobnicate > foo. If you say make bar, make will find no way to make bar in GNUmakefile, so it will use the commands from the pattern rule: make -f Makefile bar. If Makefile provides a rule for updating bar, make will apply the rule. And likewise for any other target that GNUmakefile does not say how to make.

The way this works is that the pattern rule has a pattern of just %, so it matches any target whatever. The rule specifies a prerequisite force, to guarantee that the commands will be run even if the target file already exists. We give the force target empty commands to prevent make from searching for an implicit rule to build it; otherwise it would apply the same match-anything rule to force itself and create a prerequisite loop!

How make Reads a Makefile

 

GNU make does its work in two distinct phases. During the first phase it reads all the makefiles, included makefiles, etc. and internalizes all the variables and their values, implicit and explicit rules, and constructs a dependency graph of all the targets and their prerequisites. During the second phase, make uses these internal structures to determine what targets will need to be rebuilt and to invoke the rules necessary to do so.

It’s important to understand this two-phase approach because it has a direct impact on how variable and function expansion happens; this is often a source of some confusion when writing makefiles. Here we present a summary of the phases in which expansion happens for different constructs within the makefile. We say that expansion is immediate if it happens during the first phase: make expands any variables or functions in that part of a construct as the makefile is parsed. We say that expansion is deferred if it is not performed immediately. Expansion of a deferred construct is not performed until either the construct appears later in an immediate context, or until the second phase.

You may not be familiar with some of these constructs yet. You can refer back to this section as you become familiar with them in later chapters.

Variable Assignment

 

Variable definitions are parsed as follows:

immediate = deferred
immediate ?= deferred
immediate := immediate
immediate += deferred or immediate

define immediate
  deferred
endef

For the append operator, +=, the right-hand side is considered immediate if the variable was previously set as a simple variable (:=), and deferred otherwise.
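A minimal sketch of that distinction (variable names are illustrative):

```make
# := makes a variable "simple", so += expands its right-hand side
# immediately; with = the appended text stays deferred.
SIMPLE := $(CC)     # right-hand side expanded now
SIMPLE += -g        # appended text also treated as immediate
RECUR   = $(CC)     # expansion deferred until RECUR is used
RECUR  += -g        # appended text stays deferred too
```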

Conditional Syntax

 

All instances of conditional syntax are parsed immediately, in their entirety; this includes the ifdef, ifeq, ifndef, and ifneq forms.
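A short sketch of why immediate parsing of conditionals matters:

```make
# The ifeq below is evaluated while the makefile is being read,
# so it tests the value CONFIG has at this point, not its final value.
CONFIG = debug
ifeq ($(CONFIG),debug)
CFLAGS = -g
else
CFLAGS = -O2
endif
CONFIG = release   # too late: the -g branch was already chosen
```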

Rule Definition

 

A rule is always expanded the same way, regardless of the form:

immediate : immediate ; deferred
        deferred

That is, the target and prerequisite sections are expanded immediately, and the commands used to construct the target are always deferred. This general rule is true for explicit rules, pattern rules, suffix rules, static pattern rules, and simple prerequisite definitions.
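A minimal sketch of deferred command expansion (recipe lines must begin with a tab):

```make
MSG = before
show:
        @echo $(MSG)   # expanded only when the rule runs, so this
                       # prints the final value, "after"
MSG = after
```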

Writing Rules

 

A rule appears in the makefile and says when and how to remake certain files, called the rule’s targets (most often only one per rule). It lists the other files that are the prerequisites of the target, and commands to use to create or update the target.

The order of rules is not significant, except for determining the default goal: the target for make to consider, if you do not otherwise specify one. The default goal is the target of the first rule in the first makefile. If the first rule has multiple targets, only the first target is taken as the default. There are two exceptions: a target starting with a period is not a default unless it contains one or more slashes, /, as well; and, a target that defines a pattern rule has no effect on the default goal. (See section Defining and Redefining Pattern Rules.)

Therefore, we usually write the makefile so that the first rule is the one for compiling the entire program or all the programs described by the makefile (often with a target called all). See section Arguments to Specify the Goals.

Rule Syntax

In general, a rule looks like this:

targets : prerequisites
        command
        …

or like this:

targets : prerequisites ; command
        command
        …

The targets are file names, separated by spaces. Wildcard characters may be used (see section Using Wildcard Characters in File Names) and a name of the form a(m) represents member m in archive file a (see section Archive Members as Targets). Usually there is only one target per rule, but occasionally there is a reason to have more (see section Multiple Targets in a Rule).

The command lines start with a tab character. The first command may appear on the line after the prerequisites, with a tab character, or may appear on the same line, with a semicolon. Either way, the effect is the same. See section Writing the Commands in Rules.

Because dollar signs are used to start variable references, if you really want a dollar sign in a rule you must write two of them, $$ (see section How to Use Variables). You may split a long line by inserting a backslash followed by a newline, but this is not required, as make places no limit on the length of a line in a makefile.

A rule tells make two things: when the targets are out of date, and how to update them when necessary.

The criterion for being out of date is specified in terms of the prerequisites, which consist of file names separated by spaces. (Wildcards and archive members (see section Using make to Update Archive Files) are allowed here too.) A target is out of date if it does not exist or if it is older than any of the prerequisites (by comparison of last-modification times). The idea is that the contents of the target file are computed based on information in the prerequisites, so if any of the prerequisites changes, the contents of the existing target file are no longer necessarily valid.

How to update is specified by commands. These are lines to be executed by the shell (normally sh), but with some extra features (see section Writing the Commands in Rules).

Using Wildcard Characters in File Names

 

A single file name can specify many files using wildcard characters. The wildcard characters in make are *, ? and [...], the same as in the Bourne shell. For example, *.c specifies a list of all the files (in the working directory) whose names end in .c.

The character ~ at the beginning of a file name also has special significance. If alone, or followed by a slash, it represents your home directory. For example ~/bin expands to /home/you/bin. If the ~ is followed by a word, the string represents the home directory of the user named by that word. For example ~john/bin expands to /home/john/bin. On systems which don’t have a home directory for each user (such as MS-DOS or MS-Windows), this functionality can be simulated by setting the environment variable HOME.

Wildcard expansion happens automatically in targets, in prerequisites, and in commands (where the shell does the expansion). In other contexts, wildcard expansion happens only if you request it explicitly with the wildcard function.

The special significance of a wildcard character can be turned off by preceding it with a backslash. Thus, foo\*bar would refer to a specific file whose name consists of foo, an asterisk, and bar.

Wildcard Examples

Wildcards can be used in the commands of a rule, where they are expanded by the shell. For example, here is a rule to delete all the object files:

clean:
        rm -f *.o

 

Wildcards are also useful in the prerequisites of a rule. With the following rule in the makefile, make print will print all the .c files that have changed since the last time you printed them:

print: *.c
        lpr -p $?
        touch print

This rule uses print as an empty target file; see section Empty Target Files to Record Events. (The automatic variable $? is used to print only those files that have changed; see section Automatic Variables.)

Wildcard expansion does not happen when you define a variable. Thus, if you write this:

objects = *.o

then the value of the variable objects is the actual string *.o. However, if you use the value of objects in a target, prerequisite or command, wildcard expansion will take place at that time. To set objects to the expansion, instead use:

objects := $(wildcard *.o)

See section The Function wildcard.

Pitfalls of Using Wildcards

 

Now here is an example of a naive way of using wildcard expansion, that does not do what you would intend. Suppose you would like to say that the executable file foo is made from all the object files in the directory, and you write this:

objects = *.o

foo : $(objects)
        cc -o foo $(CFLAGS) $(objects)

The value of objects is the actual string *.o. Wildcard expansion happens in the rule for foo, so that each existing .o file becomes a prerequisite of foo and will be recompiled if necessary.

But what if you delete all the .o files? When a wildcard matches no files, it is left as it is, so then foo will depend on the oddly-named file *.o. Since no such file is likely to exist, make will give you an error saying it cannot figure out how to make *.o. This is not what you want!

Actually it is possible to obtain the desired result with wildcard expansion, but you need more sophisticated techniques, including the wildcard function and string substitution. These are described in the following section.

 

Microsoft operating systems (MS-DOS and MS-Windows) use backslashes to separate directories in pathnames, like so:

c:\foo\bar\baz.c

This is equivalent to the Unix-style c:/foo/bar/baz.c (the c: part is the so-called drive letter). When make runs on these systems, it supports backslashes as well as the Unix-style forward slashes in pathnames. However, this support does not include the wildcard expansion, where backslash is a quote character. Therefore, you must use Unix-style slashes in these cases.

The Function wildcard

 

Wildcard expansion happens automatically in rules. But wildcard expansion does not normally take place when a variable is set, or inside the arguments of a function. If you want to do wildcard expansion in such places, you need to use the wildcard function, like this:

$(wildcard pattern…)

This string, used anywhere in a makefile, is replaced by a space-separated list of names of existing files that match one of the given file name patterns. If no existing file name matches a pattern, then that pattern is omitted from the output of the wildcard function. Note that this is different from how unmatched wildcards behave in rules, where they are used verbatim rather than ignored (see section Pitfalls of Using Wildcards).

One use of the wildcard function is to get a list of all the C source files in a directory, like this:

$(wildcard *.c)

We can change the list of C source files into a list of object files by replacing the .c suffix with .o in the result, like this:

$(patsubst %.c,%.o,$(wildcard *.c))

(Here we have used another function, patsubst. See section Functions for String Substitution and Analysis.)

Thus, a makefile to compile all C source files in the directory and then link them together could be written as follows:

objects := $(patsubst %.c,%.o,$(wildcard *.c))

foo : $(objects)
        cc -o foo $(objects)

(This takes advantage of the implicit rule for compiling C programs, so there is no need to write explicit rules for compiling the files. See section The Two Flavors of Variables, for an explanation of :=, which is a variant of =.)

Searching Directories for Prerequisites

 

For large systems, it is often desirable to put sources in a separate directory from the binaries. The directory search features of make facilitate this by searching several directories automatically to find a prerequisite. When you redistribute the files among directories, you do not need to change the individual rules, just the search paths.

VPATH: Search Path for All Prerequisites

 

The value of the make variable VPATH specifies a list of directories that make should search. Most often, the directories are expected to contain prerequisite files that are not in the current directory; however, VPATH specifies a search list that make applies for all files, including files which are targets of rules.

Thus, if a file that is listed as a target or prerequisite does not exist in the current directory, make searches the directories listed in VPATH for a file with that name. If a file is found in one of them, that file may become the prerequisite (see below). Rules may then specify the names of files in the prerequisite list as if they all existed in the current directory. See section Writing Shell Commands with Directory Search.

In the VPATH variable, directory names are separated by colons or blanks. The order in which directories are listed is the order followed by make in its search. (On MS-DOS and MS-Windows, semi-colons are used as separators of directory names in VPATH, since the colon can be used in the pathname itself, after the drive letter.)

For example,

VPATH = src:../headers

specifies a path containing two directories, src and ../headers, which make searches in that order.

With this value of VPATH, the following rule,

foo.o : foo.c

is interpreted as if it were written like this:

foo.o : src/foo.c

assuming the file foo.c does not exist in the current directory but is found in the directory src.

The vpath Directive

 

Similar to the VPATH variable, but more selective, is the vpath directive (note lower case), which allows you to specify a search path for a particular class of file names: those that match a particular pattern. Thus you can supply certain search directories for one class of file names and other directories (or none) for other file names.

There are three forms of the vpath directive:

vpath pattern directories

Specify the search path directories for file names that match pattern. The search path, directories, is a list of directories to be searched, separated by colons (semi-colons on MS-DOS and MS-Windows) or blanks, just like the search path used in the VPATH variable.

vpath pattern

Clear out the search path associated with pattern.

vpath

Clear all search paths previously specified with vpath directives.

A vpath pattern is a string containing a % character. The string must match the file name of a prerequisite that is being searched for, the % character matching any sequence of zero or more characters (as in pattern rules; see section Defining and Redefining Pattern Rules). For example, %.h matches files that end in .h. (If there is no %, the pattern must match the prerequisite exactly, which is not useful very often.)

% characters in a vpath directive's pattern can be quoted with preceding backslashes. Backslashes that would otherwise quote % characters can themselves be quoted with more backslashes. Backslashes that quote % characters or other backslashes are removed from the pattern before it is compared to file names. Backslashes that are not in danger of quoting % characters go unmolested.

When a prerequisite fails to exist in the current directory, if the pattern in a vpath directive matches the name of the prerequisite file, then the directories in that directive are searched just like (and before) the directories in the VPATH variable.

For example,

vpath %.h ../headers

tells make to look for any prerequisite whose name ends in .h in the directory ../headers if the file is not found in the current directory.

If several vpath patterns match the prerequisite file’s name, then make processes each matching vpath directive one by one, searching all the directories mentioned in each directive. make handles multiple vpath directives in the order in which they appear in the makefile; multiple directives with the same pattern are independent of each other.

Thus,

vpath %.c foo
vpath %   blish
vpath %.c bar

will look for a file ending in .c in foo, then blish, then bar, while

vpath %.c foo:bar
vpath %   blish

will look for a file ending in .c in foo, then bar, then blish.

How Directory Searches are Performed

 

When a prerequisite is found through directory search, regardless of type (general or selective), the pathname located may not be the one that make actually provides you in the prerequisite list. Sometimes the path discovered through directory search is thrown away.

The algorithm make uses to decide whether to keep or abandon a path found via directory search is as follows:

  1. If a target file does not exist at the path specified in the makefile, directory search is performed.
  2. If the directory search is successful, that path is kept and this file is tentatively stored as the target.
  3. All prerequisites of this target are examined using this same method.
  4. After processing the prerequisites, the target may or may not need to be rebuilt:
    1. If the target does not need to be rebuilt, the path to the file found during directory search is used for any prerequisite lists which contain this target. In short, if make doesn’t need to rebuild the target then you use the path found via directory search.
    2. If the target does need to be rebuilt (is out-of-date), the pathname found during directory search is thrown away, and the target is rebuilt using the file name specified in the makefile. In short, if make must rebuild, then the target is rebuilt locally, not in the directory found via directory search.

This algorithm may seem complex, but in practice it is quite often exactly what you want.

Other versions of make use a simpler algorithm: if the file does not exist, and it is found via directory search, then that pathname is always used whether or not the target needs to be built. Thus, if the target is rebuilt it is created at the pathname discovered during directory search.

If, in fact, this is the behavior you want for some or all of your directories, you can use the GPATH variable to indicate this to make.

GPATH has the same syntax and format as VPATH (that is, a space- or colon-delimited list of pathnames). If an out-of-date target is found by directory search in a directory that also appears in GPATH, then that pathname is not thrown away. The target is rebuilt using the expanded path.
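A sketch of GPATH in use (the directory name obj is hypothetical):

```make
# Out-of-date targets found in obj/ via directory search are rebuilt
# there, instead of being rebuilt in the current directory.
VPATH = obj
GPATH = obj
```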

Writing Shell Commands with Directory Search

 

When a prerequisite is found in another directory through directory search, this cannot change the commands of the rule; they will execute as written. Therefore, you must write the commands with care so that they will look for the prerequisite in the directory where make finds it.

This is done with the automatic variables such as $^ (see section Automatic Variables). For instance, the value of $^ is a list of all the prerequisites of the rule, including the names of the directories in which they were found, and the value of $@ is the target. Thus:

foo.o : foo.c
        cc -c $(CFLAGS) $^ -o $@

(The variable CFLAGS exists so you can specify flags for C compilation by implicit rules; we use it here for consistency so it will affect all C compilations uniformly; see section Variables Used by Implicit Rules.)

Often the prerequisites include header files as well, which you do not want to mention in the commands. The automatic variable $< is just the first prerequisite:

VPATH = src:../headers
foo.o : foo.c defs.h hack.h
        cc -c $(CFLAGS) $< -o $@

Directory Search and Implicit Rules

 

The search through the directories specified in VPATH or with vpath also happens during consideration of implicit rules (see section Using Implicit Rules).

For example, when a file foo.o has no explicit rule, make considers implicit rules, such as the built-in rule to compile foo.c if that file exists. If such a file is lacking in the current directory, the appropriate directories are searched for it. If foo.c exists (or is mentioned in the makefile) in any of the directories, the implicit rule for C compilation is applied.

The commands of implicit rules normally use automatic variables as a matter of necessity; consequently they will use the file names found by directory search with no extra effort.

Directory Search for Link Libraries

 

Directory search applies in a special way to libraries used with the linker. This special feature comes into play when you write a prerequisite whose name is of the form -lname. (You can tell something strange is going on here because the prerequisite is normally the name of a file, and the file name of a library generally looks like libname.a, not like -lname.)

When a prerequisite’s name has the form -lname, make handles it specially by searching for the file libname.so in the current directory, in directories specified by matching vpath search paths and the VPATH search path, and then in the directories /lib, /usr/lib, and prefix/lib (normally /usr/local/lib, but MS-DOS/MS-Windows versions of make behave as if prefix is defined to be the root of the DJGPP installation tree).

If that file is not found, then the file libname.a is searched for, in the same directories as above.

For example, if there is a /usr/lib/libcurses.a library on your system (and no /usr/lib/libcurses.so file), then

foo : foo.c -lcurses
        cc $^ -o $@

would cause the command cc foo.c /usr/lib/libcurses.a -o foo to be executed when foo is older than foo.c or than /usr/lib/libcurses.a.

Although the default set of files to be searched for is libname.so and libname.a, this is customizable via the .LIBPATTERNS variable. Each word in the value of this variable is a pattern string. When a prerequisite like -lname is seen, make will replace the percent in each pattern in the list with name and perform the above directory searches using that library filename. If no library is found, the next word in the list will be used.

The default value for .LIBPATTERNS is “lib%.so lib%.a”, which provides the default behavior described above.

You can turn off link library expansion completely by setting this variable to an empty value.
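A sketch of customizing the search patterns (the %.dll pattern is an illustration, not a built-in default):

```make
# Also try DLL-style names when resolving -lname prerequisites:
.LIBPATTERNS = lib%.so lib%.a %.dll

# Setting it empty turns -lname handling off entirely:
# .LIBPATTERNS =
```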

Phony Targets

 

A phony target is one that is not really the name of a file. It is just a name for some commands to be executed when you make an explicit request. There are two reasons to use a phony target: to avoid a conflict with a file of the same name, and to improve performance.

If you write a rule whose commands will not create the target file, the commands will be executed every time the target comes up for remaking. Here is an example:

clean:
        rm *.o temp

Because the rm command does not create a file named clean, probably no such file will ever exist. Therefore, the rm command will be executed every time you say make clean.

The phony target will cease to work if anything ever does create a file named clean in this directory. Since it has no prerequisites, the file clean would inevitably be considered up to date, and its commands would not be executed. To avoid this problem, you can explicitly declare the target to be phony, using the special target .PHONY (see section Special Built-in Target Names) as follows:

.PHONY : clean

Once this is done, make clean will run the commands regardless of whether there is a file named clean.

Since it knows that phony targets do not name actual files that could be remade from other files, make skips the implicit rule search for phony targets (see section Using Implicit Rules). This is why declaring a target phony is good for performance, even if you are not worried about the actual file existing.

Thus, you first write the line that states that clean is a phony target, then you write the rule, like this:

.PHONY: clean
clean:
        rm *.o temp

Another example of the usefulness of phony targets is in conjunction with recursive invocations of make. In this case the makefile will often contain a variable which lists a number of subdirectories to be built. One way to handle this is with one rule whose command is a shell loop over the subdirectories, like this:

SUBDIRS = foo bar baz

subdirs:
        for dir in $(SUBDIRS); do \
          $(MAKE) -C $$dir; \
        done

There are a few problems with this method, however. First, any error detected in a submake is not noted by this rule, so it will continue to build the rest of the directories even when one fails. This can be overcome by adding shell commands to note the error and exit, but then it will do so even if make is invoked with the -k option, which is unfortunate. Second, and perhaps more importantly, you cannot take advantage of the parallel build capabilities of make using this method, since there is only one rule.

By declaring the subdirectories as phony targets (you must do this, as the subdirectory obviously always exists; otherwise it won’t be built), you can remove these problems:

SUBDIRS = foo bar baz

.PHONY: subdirs $(SUBDIRS)

subdirs: $(SUBDIRS)

$(SUBDIRS):
        $(MAKE) -C $@

foo: baz

Here we’ve also declared that the foo subdirectory cannot be built until after the baz subdirectory is complete; this kind of relationship declaration is particularly important when attempting parallel builds.

A phony target should not be a prerequisite of a real target file; if it is, its commands are run every time make goes to update that file. As long as a phony target is never a prerequisite of a real target, the phony target commands will be executed only when the phony target is a specified goal (see section Arguments to Specify the Goals).
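A sketch of the pitfall (names hypothetical; recipe lines must begin with a tab):

```make
# Because update is phony, it is always considered out of date,
# so prog is relinked on every run even when nothing has changed.
.PHONY: update
prog: prog.o update
        cc -o prog prog.o
```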

Phony targets can have prerequisites. When one directory contains multiple programs, it is most convenient to describe all of the programs in one makefile ./Makefile. Since the target remade by default will be the first one in the makefile, it is common to make this a phony target named all and give it, as prerequisites, all the individual programs. For example:

all : prog1 prog2 prog3
.PHONY : all

prog1 : prog1.o utils.o
        cc -o prog1 prog1.o utils.o

prog2 : prog2.o
        cc -o prog2 prog2.o

prog3 : prog3.o sort.o utils.o
        cc -o prog3 prog3.o sort.o utils.o

Now you can say just make to remake all three programs, or specify as arguments the ones to remake (as in make prog1 prog3).

When one phony target is a prerequisite of another, it serves as a subroutine of the other. For example, here make cleanall will delete the object files, the difference files, and the file program:

.PHONY: cleanall cleanobj cleandiff

cleanall : cleanobj cleandiff
        rm program

cleanobj :
        rm *.o

cleandiff :
        rm *.diff

Rules without Commands or Prerequisites


If a rule has no prerequisites or commands, and the target of the rule is a nonexistent file, then make imagines this target to have been updated whenever its rule is run. This implies that all targets depending on this one will always have their commands run.
An example will illustrate this:

clean: FORCE
        rm $(objects)
FORCE:

Here the target FORCE satisfies the special conditions, so the target clean that depends on it is forced to run its commands. There is nothing special about the name FORCE, but that is one name commonly used this way.
As you can see, using FORCE this way has the same results as using .PHONY: clean.
Using .PHONY is more explicit and more efficient. However, other versions of make do not support .PHONY; thus FORCE appears in many makefiles. See section Phony Targets.

Empty Target Files to Record Events


The empty target is a variant of the phony target; it is used to hold commands for an action that you request explicitly from time to time. Unlike a phony target, this target file can really exist; but the file’s contents do not matter, and usually are empty.
The purpose of the empty target file is to record, with its last-modification time, when the rule’s commands were last executed. It does so because one of the commands is a touch command to update the target file.
The empty target file should have some prerequisites (otherwise it doesn’t make sense). When you ask to remake the empty target, the commands are executed if any prerequisite is more recent than the target; in other words, if a prerequisite has changed since the last time you remade the target. Here is an example:

print: foo.c bar.c
        lpr -p $?
        touch print


With this rule, make print will execute the lpr command if either source file has changed since the last make print. The automatic variable $? is used to print only those files that have changed (see section Automatic Variables).
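As a hedged demonstration of the empty-target idiom above, the shell session below builds a scratch makefile whose print rule echoes the changed prerequisites instead of sending them to lpr (the file names are hypothetical, and echo stands in for the printer command):

```shell
#!/bin/sh
# Sketch of the empty-target idiom; 'echo' substitutes for lpr
# so no printer is needed.
dir=$(mktemp -d)
cd "$dir" || exit 1

# The recipe touches 'print' so its timestamp records the last run.
printf 'print: foo.c bar.c\n\t@echo changed: $?\n\t@touch print\n' > Makefile

touch foo.c bar.c
make print    # first run: both sources are newer than 'print'
make print    # second run: nothing changed, so the rule does not fire
```

The first invocation runs the commands and creates the empty target file; the second finds print newer than both prerequisites and does nothing.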

Special Built-in Target Names


Certain names have special meanings if they appear as targets.

.PHONY

The prerequisites of the special target .PHONY are considered to be phony targets. When it is time to consider such a target, make will run its commands unconditionally, regardless of whether a file with that name exists or what its last-modification time is. See section Phony Targets.

.SUFFIXES

The prerequisites of the special target .SUFFIXES are the list of suffixes to be used in checking for suffix rules. See section Old-Fashioned Suffix Rules.
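For illustration (the suffixes shown are the conventional C ones, not something this text prescribes), a makefile can clear the built-in suffix list and declare its own:

```makefile
# Delete the default suffix list, then define our own.
.SUFFIXES:
.SUFFIXES: .c .o
```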

.DEFAULT

The commands specified for .DEFAULT are used for any target for which no rules are found (either explicit rules or implicit rules). See section Defining Last-Resort Default Rules. If .DEFAULT commands are specified, every file mentioned as a prerequisite, but not as a target in a rule, will have these commands executed on its behalf. See section Implicit Rule Search Algorithm.

.PRECIOUS

The targets which .PRECIOUS depends on are given the following special treatment: if make is killed or interrupted during the execution of their commands, the target is not deleted. See section Interrupting or Killing make. Also, if the target is an intermediate file, it will not be deleted after it is no longer needed, as is normally done. See section Chains of Implicit Rules. In this latter respect it overlaps with the .SECONDARY special target. You can also list the target pattern of an implicit rule (such as %.o) as a prerequisite file of the special target .PRECIOUS to preserve intermediate files created by rules whose target patterns match that file’s name.

.INTERMEDIATE

The targets which .INTERMEDIATE depends on are treated as intermediate files. See section Chains of Implicit Rules. .INTERMEDIATE with no prerequisites has no effect.

.SECONDARY

The targets which .SECONDARY depends on are treated as intermediate files, except that they are never automatically deleted. See section Chains of Implicit Rules. .SECONDARY with no prerequisites causes all targets to be treated as secondary (i.e., no target is removed because it is considered intermediate).
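A minimal sketch of the difference, with a hypothetical yacc-style chain: a generated parse.c would normally be deleted as an intermediate file once parse.o is built, but marking it secondary preserves it:

```makefile
# parse.c is produced from parse.y by an implicit rule chain and would
# otherwise be removed as an intermediate; .SECONDARY keeps it on disk.
.SECONDARY: parse.c
```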

.DELETE_ON_ERROR

If .DELETE_ON_ERROR is mentioned as a target anywhere in the makefile, then make will delete the target of a rule if it has changed and its commands exit with a nonzero exit status, just as it does when it receives a signal. See section Errors in Commands.

.IGNORE

If you specify prerequisites for .IGNORE, then make will ignore errors in execution of the commands run for those particular files. The commands for .IGNORE are not meaningful. If mentioned as a target with no prerequisites, .IGNORE says to ignore errors in execution of commands for all files. This usage of .IGNORE is supported only for historical compatibility. Since this affects every command in the makefile, it is not very useful; we recommend you use the more selective ways to ignore errors in specific commands. See section Errors in Commands.

.SILENT

If you specify prerequisites for .SILENT, then make will not print the commands to remake those particular files before executing them. The commands for .SILENT are not meaningful. If mentioned as a target with no prerequisites, .SILENT says not to print any commands before executing them. This usage of .SILENT is supported only for historical compatibility. We recommend you use the more selective ways to silence specific commands. See section Command Echoing. If you want to silence all commands for a particular run of make, use the -s or --silent option (see section Summary of Options).

.EXPORT_ALL_VARIABLES

Simply by being mentioned as a target, this tells make to export all variables to child processes by default. See section Communicating Variables to a Sub-make.

.NOTPARALLEL

If .NOTPARALLEL is mentioned as a target, then this invocation of make will be run serially, even if the -j option is given. Any recursively invoked make command will still be run in parallel (unless its makefile contains this target). Any prerequisites on this target are ignored.

Any defined implicit rule suffix also counts as a special target if it appears as a target, and so does the concatenation of two suffixes, such as .c.o. These targets are suffix rules, an obsolete way of defining implicit rules (but a way still widely used). In principle, any target name could be special in this way if you break it in two and add both pieces to the suffix list. In practice, suffixes normally begin with '.', so these special target names also begin with '.'. See section Old-Fashioned Suffix Rules.
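For example (a sketch using the conventional C compilation command), the double-suffix rule .c.o is the suffix-rule spelling of the pattern rule %.o: %.c:

```makefile
# Old-fashioned suffix rule: how to make a .o file from a .c file.
.c.o:
        $(CC) -c $(CFLAGS) $< -o $@
```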

Multiple Targets in a Rule


A rule with multiple targets is equivalent to writing many rules, each with one target, and all identical aside from that. The same commands apply to all the targets, but their effects may vary because you can substitute the actual target name into the command using $@. The rule contributes the same prerequisites to all the targets also.
This is useful in two cases.

  • You want just prerequisites, no commands. For example:

kbd.o command.o files.o: command.h

gives an additional prerequisite to each of the three object files mentioned.

  • Similar commands work for all the targets. The commands do not need to be absolutely identical, since the automatic variable $@ can be used to substitute the particular target to be remade into the commands (see section Automatic Variables). For example:

bigoutput littleoutput : text.g
        generate text.g -$(subst output,,$@) > $@

is equivalent to

bigoutput : text.g
        generate text.g -big > bigoutput
littleoutput : text.g
        generate text.g -little > littleoutput

Here we assume the hypothetical program generate makes two types of output, one if given -big and one if given -little. See section Functions for String Substitution and Analysis, for an explanation of the subst function.
Suppose you would like to vary the prerequisites according to the target, much as the variable $@ allows you to vary the commands. You cannot do this with multiple targets in an ordinary rule, but you can do it with a static pattern rule. See section Static Pattern Rules.

Multiple Rules for One Target

One file can be the target of several rules. All the prerequisites mentioned in all the rules are merged into one list of prerequisites for the target. If the target is older than any prerequisite from any rule, the commands are executed.
There can only be one set of commands to be executed for a file. If more than one rule gives commands for the same file, make uses the last set given and prints an error message. (As a special case, if the file’s name begins with a dot, no error message is printed. This odd behavior is only for compatibility with other implementations of make.) There is no reason to write your makefiles this way; that is why make gives you an error message.
An extra rule with just prerequisites can be used to give a few extra prerequisites to many files at once. For example, one usually has a variable named objects containing a list of all the compiler output files in the system being made. An easy way to say that all of them must be recompiled if config.h changes is to write the following:

objects = foo.o bar.o

foo.o : defs.h
bar.o : defs.h test.h

$(objects) : config.h

This could be inserted or taken out without changing the rules that really specify how to make the object files, making it a convenient form to use if you wish to add the additional prerequisite intermittently.
Another wrinkle is that the additional prerequisites could be specified with a variable that you set with a command argument to make (see section Overriding Variables). For example,

extradeps=

$(objects) : $(extradeps)

means that the command make extradeps=foo.h will consider foo.h as a prerequisite of each object file, but plain make will not.
If none of the explicit rules for a target has commands, then make searches for an applicable implicit rule to find some commands (see section Using Implicit Rules).

Static Pattern Rules


Static pattern rules are rules which specify multiple targets and construct the prerequisite names for each target based on the target name. They are more general than ordinary rules with multiple targets because the targets do not have to have identical prerequisites. Their prerequisites must be analogous, but not necessarily identical.

Syntax of Static Pattern Rules


Here is the syntax of a static pattern rule:

targets …: target-pattern: dep-patterns …
        commands
        …

The targets list specifies the targets that the rule applies to. The targets can contain wildcard characters, just like the targets of ordinary rules (see section Using Wildcard Characters in File Names).
The target-pattern and dep-patterns say how to compute the prerequisites of each target. Each target is matched against the target-pattern to extract a part of the target name, called the stem. This stem is substituted into each of the dep-patterns to make the prerequisite names (one from each dep-pattern).
Each pattern normally contains the character % just once. When the target-pattern matches a target, the % can match any part of the target name; this part is called the stem. The rest of the pattern must match exactly. For example, the target foo.o matches the pattern %.o, with foo as the stem. The targets foo.c and foo.out do not match that pattern.
The prerequisite names for each target are made by substituting the stem for the % in each prerequisite pattern. For example, if one prerequisite pattern is %.c, then substitution of the stem foo gives the prerequisite name foo.c. It is legitimate to write a prerequisite pattern that does not contain %; then this prerequisite is the same for all targets.
% characters in pattern rules can be quoted with preceding backslashes (\). Backslashes that would otherwise quote % characters can be quoted with more backslashes. Backslashes that quote % characters or other backslashes are removed from the pattern before it is compared to file names or has a stem substituted into it. Backslashes that are not in danger of quoting % characters go unmolested. For example, the pattern the\%weird\\%pattern\\ has the%weird\ preceding the operative % character, and pattern\\ following it. The final two backslashes are left alone because they cannot affect any % character.
Here is an example, which compiles each of foo.o and bar.o from the corresponding .c file:

objects = foo.o bar.o

all: $(objects)

$(objects): %.o: %.c
        $(CC) -c $(CFLAGS) $< -o $@

Here $< is the automatic variable that holds the name of the prerequisite and $@ is the automatic variable that holds the name of the target; see section Automatic Variables.
Each target specified must match the target pattern; a warning is issued for each target that does not. If you have a list of files, only some of which will match the pattern, you can use the filter function to remove nonmatching file names (see section Functions for String Substitution and Analysis):

files = foo.elc bar.o lose.o

$(filter %.o,$(files)): %.o: %.c
        $(CC) -c $(CFLAGS) $< -o $@
$(filter %.elc,$(files)): %.elc: %.el
        emacs -f batch-byte-compile $<

In this example the result of $(filter %.o,$(files)) is bar.o lose.o, and the first static pattern rule causes each of these object files to be updated by compiling the corresponding C source file. The result of $(filter %.elc,$(files)) is foo.elc, so that file is made from foo.el.
Another example shows how to use $* in static pattern rules:

bigoutput littleoutput : %output : text.g
        generate text.g -$* > $@

When the generate command is run, $* will expand to the stem, either big or little.

Static Pattern Rules versus Implicit Rules


A static pattern rule has much in common with an implicit rule defined as a pattern rule (see section Defining and Redefining Pattern Rules). Both have a pattern for the target and patterns for constructing the names of prerequisites. The difference is in how make decides when the rule applies.
An implicit rule can apply to any target that matches its pattern, but it does apply only when the target has no commands otherwise specified, and only when the prerequisites can be found. If more than one implicit rule appears applicable, only one applies; the choice depends on the order of rules.
By contrast, a static pattern rule applies to the precise list of targets that you specify in the rule. It cannot apply to any other target and it invariably does apply to each of the targets specified. If two conflicting rules apply, and both have commands, that’s an error.
The static pattern rule can be better than an implicit rule for these reasons:

  • You may wish to override the usual implicit rule for a few files whose names cannot be categorized syntactically but can be given in an explicit list.
  • If you cannot be sure of the precise contents of the directories you are using, you may not be sure which other irrelevant files might lead make to use the wrong implicit rule. The choice might depend on the order in which the implicit rule search is done. With static pattern rules, there is no uncertainty: each rule applies to precisely the targets specified.

Double-Colon Rules


Double-colon rules are rules written with :: instead of : after the target names. They are handled differently from ordinary rules when the same target appears in more than one rule.
When a target appears in multiple rules, all the rules must be the same type: all ordinary, or all double-colon. If they are double-colon, each of them is independent of the others. Each double-colon rule’s commands are executed if the target is older than any prerequisites of that rule. This can result in executing none, any, or all of the double-colon rules.
Double-colon rules with the same target are in fact completely separate from one another. Each double-colon rule is processed individually, just as rules with different targets are processed.
The double-colon rules for a target are executed in the order they appear in the makefile. However, the cases where double-colon rules really make sense are those where the order of executing the commands would not matter.
Double-colon rules are somewhat obscure and not often very useful; they provide a mechanism for cases in which the method used to update a target differs depending on which prerequisite files caused the update, and such cases are rare.
Each double-colon rule should specify commands; if it does not, an implicit rule will be used if one applies. See section Using Implicit Rules.

Generating Prerequisites Automatically


In the makefile for a program, many of the rules you need to write often say only that some object file depends on some header file. For example, if main.c uses defs.h via an #include, you would write:

main.o: defs.h

You need this rule so that make knows that it must remake main.o whenever defs.h changes. You can see that for a large program you would have to write dozens of such rules in your makefile. And, you must always be very careful to update the makefile every time you add or remove an #include.
To avoid this hassle, most modern C compilers can write these rules for you, by looking at the #include lines in the source files. Usually this is done with the -M option to the compiler. For example, the command:

cc -M main.c

generates the output:

main.o : main.c defs.h

Thus you no longer have to write all those rules yourself. The compiler will do it for you.
Note that such a prerequisite constitutes mentioning main.o in a makefile, so it can never be considered an intermediate file by implicit rule search. This means that make won’t ever remove the file after using it; see section Chains of Implicit Rules.
With old make programs, it was traditional practice to use this compiler feature to generate prerequisites on demand with a command like make depend. That command would create a file depend containing all the automatically-generated prerequisites; then the makefile could use include to read them in (see section Including Other Makefiles).
In GNU make, the feature of remaking makefiles makes this practice obsolete–you need never tell make explicitly to regenerate the prerequisites, because it always regenerates any makefile that is out of date. See section How Makefiles Are Remade.
The practice we recommend for automatic prerequisite generation is to have one makefile corresponding to each source file. For each source file name.c there is a makefile name.d which lists what files the object file name.o depends on. That way only the source files that have changed need to be rescanned to produce the new prerequisites.
Here is the pattern rule to generate a file of prerequisites (i.e., a makefile) called name.d from a C source file called name.c:

%.d: %.c
        set -e; $(CC) -M $(CPPFLAGS) $< \
          | sed 's/\($*\)\.o[ :]*/\1.o $@ : /g' > $@; \
        [ -s $@ ] || rm -f $@

See section Defining and Redefining Pattern Rules, for information on defining pattern rules. The -e flag to the shell makes it exit immediately if the $(CC) command fails (exits with a nonzero status). Normally the shell exits with the status of the last command in the pipeline (sed in this case), so make would not notice a nonzero status from the compiler.
With the GNU C compiler, you may wish to use the -MM flag instead of -M. This omits prerequisites on system header files. See section 'Options Controlling the Preprocessor' in Using GNU CC, for details.
The purpose of the sed command is to translate (for example):

main.o : main.c defs.h

into:

main.o main.d : main.c defs.h

This makes each .d file depend on all the source and header files that the corresponding .o file depends on. make then knows it must regenerate the prerequisites whenever any of the source or header files changes.
Once you’ve defined the rule to remake the .d files, you then use the include directive to read them all in. See section Including Other Makefiles. For example:

sources = foo.c bar.c

include $(sources:.c=.d)

(This example uses a substitution variable reference to translate the list of source files foo.c bar.c into a list of prerequisite makefiles, foo.d bar.d. See section Substitution References, for full information on substitution references.) Since the .d files are makefiles like any others, make will remake them as necessary with no further work from you. See section How Makefiles Are Remade.

Writing the Commands in Rules


The commands of a rule consist of shell command lines to be executed one by one. Each command line must start with a tab, except that the first command line may be attached to the target-and-prerequisites line with a semicolon in between. Blank lines and lines of just comments may appear among the command lines; they are ignored. (But beware, an apparently “blank” line that begins with a tab is not blank! It is an empty command; see section Using Empty Commands.)
Users use many different shell programs, but commands in makefiles are always interpreted by /bin/sh unless the makefile specifies otherwise. See section Command Execution.
The shell that is in use determines whether comments can be written on command lines, and what syntax they use. When the shell is /bin/sh, a # starts a comment that extends to the end of the line. The # does not have to be at the beginning of a line. Text on a line before a # is not part of the comment.
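A small sketch of this behavior, with a hypothetical target: a tab-indented line is a command line even when it starts with #, so it is the shell, not make, that discards it:

```makefile
report:
        # this whole line is passed to /bin/sh, which discards it as a comment
        echo generating report   # the shell also ignores text after this '#'
```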

Command Echoing


Normally make prints each command line before it is executed. We call this echoing because it gives the appearance that you are typing the commands yourself.
When a line starts with @, the echoing of that line is suppressed. The @ is discarded before the command is passed to the shell. Typically you would use this for a command whose only effect is to print something, such as an echo command to indicate progress through the makefile:

@echo About to make distribution files

When make is given the flag -n or --just-print it only echoes commands, it won’t execute them. See section Summary of Options. In this case and only this case, even the commands starting with @ are printed. This flag is useful for finding out which commands make thinks are necessary without actually doing them.
The -s or --silent flag to make prevents all echoing, as if all commands started with @. A rule in the makefile for the special target .SILENT without prerequisites has the same effect (see section Special Built-in Target Names). .SILENT is essentially obsolete since @ is more flexible.

Command Execution


When it is time to execute commands to update a target, they are executed by making a new subshell for each line. (In practice, make may take shortcuts that do not affect the results.)
Please note: this implies that shell commands such as cd that set variables local to each process will not affect the following command lines. If you want to use cd to affect the next command, put the two on a single line with a semicolon between them. Then make will consider them a single command and pass them, together, to a shell which will execute them in sequence. For example:

foo : bar/lose
        cd bar; gobble lose > ../foo

If you would like to split a single shell command into multiple lines of text, you must use a backslash at the end of all but the last subline. Such a sequence of lines is combined into a single line, by deleting the backslash-newline sequences, before passing it to the shell. Thus, the following is equivalent to the preceding example:

foo : bar/lose
        cd bar; \
        gobble lose > ../foo

The program used as the shell is taken from the variable SHELL. By default, the program /bin/sh is used.
On MS-DOS, if SHELL is not set, the value of the variable COMSPEC (which is always set) is used instead.
The processing of lines that set the variable SHELL in Makefiles is different on MS-DOS. The stock shell, command.com, is ridiculously limited in its functionality and many users of make tend to install a replacement shell. Therefore, on MS-DOS, make examines the value of SHELL, and changes its behavior based on whether it points to a Unix-style or DOS-style shell. This allows reasonable functionality even if SHELL points to command.com.
If SHELL points to a Unix-style shell, make on MS-DOS additionally checks whether that shell can indeed be found; if not, it ignores the line that sets SHELL. In MS-DOS, GNU make searches for the shell in the following places:

  1. In the precise place pointed to by the value of SHELL. For example, if the makefile specifies SHELL = /bin/sh, make will look in the directory /bin on the current drive.
  2. In the current directory.
  3. In each of the directories in the PATH variable, in order.

In every directory it examines, make will first look for the specific file (sh in the example above). If this is not found, it will also look in that directory for that file with one of the known extensions which identify executable files. For example .exe, .com, .bat, .btm, .sh, and some others.
If any of these attempts is successful, the value of SHELL will be set to the full pathname of the shell as found. However, if none of these is found, the value of SHELL will not be changed, and thus the line that sets it will be effectively ignored. This is so make will only support features specific to a Unix-style shell if such a shell is actually installed on the system where make runs.
Note that this extended search for the shell is limited to the cases where SHELL is set from the Makefile; if it is set in the environment or command line, you are expected to set it to the full pathname of the shell, exactly as things are on Unix.
The effect of the above DOS-specific processing is that a Makefile that says SHELL = /bin/sh (as many Unix makefiles do), will work on MS-DOS unaltered if you have e.g. sh.exe installed in some directory along your PATH.
Unlike most variables, the variable SHELL is never set from the environment. This is because the SHELL environment variable is used to specify your personal choice of shell program for interactive use. It would be very bad for personal choices like this to affect the functioning of makefiles. See section Variables from the Environment. However, on MS-DOS and MS-Windows the value of SHELL in the environment is used, since on those systems most users do not set this variable, and therefore it is most likely set specifically to be used by make. On MS-DOS, if the setting of SHELL is not suitable for make, you can set the variable MAKESHELL to the shell that make should use; this will override the value of SHELL.

Parallel Execution


GNU make knows how to execute several commands at once. Normally, make will execute only one command at a time, waiting for it to finish before executing the next. However, the -j or --jobs option tells make to execute many commands simultaneously.
On MS-DOS, the -j option has no effect, since that system doesn’t support multi-processing.
If the -j option is followed by an integer, this is the number of commands to execute at once; this is called the number of job slots. If there is nothing looking like an integer after the -j option, there is no limit on the number of job slots. The default number of job slots is one, which means serial execution (one thing at a time).
One unpleasant consequence of running several commands simultaneously is that output generated by the commands appears whenever each command sends it, so messages from different commands may be interspersed.
Another problem is that two processes cannot both take input from the same device; so to make sure that only one command tries to take input from the terminal at once, make will invalidate the standard input streams of all but one running command. This means that attempting to read from standard input will usually be a fatal error (a Broken pipe signal) for most child processes if there are several.
It is unpredictable which command will have a valid standard input stream (which will come from the terminal, or wherever you redirect the standard input of make). The first command run will always get it first, and the first command started after that one finishes will get it next, and so on.
We will change how this aspect of make works if we find a better alternative. In the mean time, you should not rely on any command using standard input at all if you are using the parallel execution feature; but if you are not using this feature, then standard input works normally in all commands.
Finally, handling recursive make invocations raises issues. For more information on this, see section Communicating Options to a Sub-make.
If a command fails (is killed by a signal or exits with a nonzero status), and errors are not ignored for that command (see section Errors in Commands), the remaining command lines to remake the same target will not be run. If a command fails and the -k or --keep-going option was not given (see section Summary of Options), make aborts execution. If make terminates for any reason (including a signal) with child processes running, it waits for them to finish before actually exiting.
When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the -l option to tell make to limit the number of jobs to run at once, based on the load average. The -l or --max-load option is followed by a floating-point number. For example,

-l 2.5

will not let make start more than one job if the load average is above 2.5. The -l option with no following number removes the load limit, if one was given with a previous -l option.
More precisely, when make goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with -l, make waits until the load average goes below that limit, or until all the other jobs finish.
By default, there is no load limit.

Errors in Commands


After each shell command returns, make looks at its exit status. If the command completed successfully, the next command line is executed in a new shell; after the last command line is finished, the rule is finished.
If there is an error (the exit status is nonzero), make gives up on the current rule, and perhaps on all rules.
Sometimes the failure of a certain command does not indicate a problem. For example, you may use the mkdir command to ensure that a directory exists. If the directory already exists, mkdir will report an error, but you probably want make to continue regardless.
To ignore errors in a command line, write a - at the beginning of the line’s text (after the initial tab). The - is discarded before the command is passed to the shell for execution.
For example,

clean:
        -rm -f *.o


This causes rm to continue even if it is unable to remove a file.
When you run make with the -i or --ignore-errors flag, errors are ignored in all commands of all rules. A rule in the makefile for the special target .IGNORE has the same effect, if there are no prerequisites. These ways of ignoring errors are obsolete because - is more flexible.
When errors are to be ignored, because of either a - or the -i flag, make treats an error return just like success, except that it prints out a message that tells you the status code the command exited with, and says that the error has been ignored.
When an error happens that make has not been told to ignore, it implies that the current target cannot be correctly remade, and neither can any other that depends on it either directly or indirectly. No further commands will be executed for these targets, since their preconditions have not been achieved.
Normally make gives up immediately in this circumstance, returning a nonzero status. However, if the -k or --keep-going flag is specified, make continues to consider the other prerequisites of the pending targets, remaking them if necessary, before it gives up and returns nonzero status. For example, after an error in compiling one object file, make -k will continue compiling other object files even though it already knows that linking them will be impossible. See section Summary of Options.
The usual behavior assumes that your purpose is to get the specified targets up to date; once make learns that this is impossible, it might as well report the failure immediately. The -k option says that the real purpose is to test as many of the changes made in the program as possible, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs’ compile command passes the -k flag by default.
Usually when a command fails, if it has changed the target file at all, the file is corrupted and cannot be used–or at least it is not completely updated. Yet the file’s timestamp says that it is now up to date, so the next time make runs, it will not try to update that file. The situation is just the same as when the command is killed by a signal; see section Interrupting or Killing make. So generally the right thing to do is to delete the target file if the command fails after beginning to change the file. make will do this if .DELETE_ON_ERROR appears as a target. This is almost always what you want make to do, but it is not historical practice; so for compatibility, you must explicitly request it.

Interrupting or Killing make

If make gets a fatal signal while a command is executing, it may delete the target file that the command was supposed to update. This is done if the target file’s last-modification time has changed since make first checked it.
The purpose of deleting the target is to make sure that it is remade from scratch when make is next run. Why is this? Suppose you type Ctrl-c while a compiler is running, and it has begun to write an object file foo.o. The Ctrl-c kills the compiler, resulting in an incomplete file whose last-modification time is newer than the source file foo.c. But make also receives the Ctrl-c signal and deletes this incomplete file. If make did not do this, the next invocation of make would think that foo.o did not require updating–resulting in a strange error message from the linker when it tries to link an object file half of which is missing.
You can prevent the deletion of a target file in this way by making the special target .PRECIOUS depend on it. Before remaking a target, make checks to see whether it appears on the prerequisites of .PRECIOUS, and thereby decides whether the target should be deleted if a signal happens. Some reasons why you might do this are that the target is updated in some atomic fashion, or exists only to record a modification-time (its contents do not matter), or must exist at all times to prevent other sorts of trouble.
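For instance, a hypothetical makefile might protect a database file that is updated in place (update-db is an invented command used only for illustration):

```make
# .PRECIOUS prevents make from deleting data.db if the update
# command fails or make is interrupted partway through.
.PRECIOUS: data.db

data.db: records.txt
        update-db records.txt data.db
```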

Recursive Use of make


Recursive use of make means using make as a command in a makefile. This technique is useful when you want separate makefiles for various subsystems that compose a larger system. For example, suppose you have a subdirectory subdir which has its own makefile, and you would like the containing directory’s makefile to run make on the subdirectory. You can do it by writing this:

subsystem:
        cd subdir && $(MAKE)

or, equivalently, this (see section Summary of Options):

subsystem:
        $(MAKE) -C subdir


You can write recursive make commands just by copying this example, but there are many things to know about how they work and why, and about how the sub-make relates to the top-level make.
For your convenience, GNU make sets the variable CURDIR to the pathname of the current working directory for you. If -C is in effect, it will contain the path of the new directory, not the original. The value has the same precedence it would have if it were set in the makefile (by default, an environment variable CURDIR will not override this value). Note that setting this variable has no effect on the operation of make.

How the MAKE Variable Works


Recursive make commands should always use the variable MAKE, not the explicit command name make, as shown here:

subsystem:
        cd subdir && $(MAKE)

The value of this variable is the file name with which make was invoked. If this file name was /bin/make, then the command executed is cd subdir && /bin/make. If you use a special version of make to run the top-level makefile, the same special version will be executed for recursive invocations.
As a special feature, using the variable MAKE in the commands of a rule alters the effects of the -t (--touch), -n (--just-print), or -q (--question) option. Using the MAKE variable has the same effect as using a + character at the beginning of the command line. See section Instead of Executing the Commands.
Consider the command make -t in the above example. (The -t option marks targets as up to date without actually running any commands; see section Instead of Executing the Commands.) Following the usual definition of -t, a make -t command in the example would create a file named subsystem and do nothing else. What you really want it to do is run cd subdir && make -t; but that would require executing the command, and -t says not to execute commands.
The special feature makes this do what you want: whenever a command line of a rule contains the variable MAKE, the flags -t, -n and -q do not apply to that line. Command lines containing MAKE are executed normally despite the presence of a flag that causes most commands not to be run. The usual MAKEFLAGS mechanism passes the flags to the sub-make (see section Communicating Options to a Sub-make), so your request to touch the files, or print the commands, is propagated to the subsystem.

Communicating Variables to a Sub-make


Variable values of the top-level make can be passed to the sub-make through the environment by explicit request. These variables are defined in the sub-make as defaults, but do not override what is specified in the makefile used by the sub-make makefile unless you use the -e switch (see section Summary of Options).
To pass down, or export, a variable, make adds the variable and its value to the environment for running each command. The sub-make, in turn, uses the environment to initialize its table of variable values. See section Variables from the Environment.
Except by explicit request, make exports a variable only if it is either defined in the environment initially or set on the command line, and if its name consists only of letters, numbers, and underscores. Some shells cannot cope with environment variable names consisting of characters other than letters, numbers, and underscores.
The special variables SHELL and MAKEFLAGS are always exported (unless you unexport them). MAKEFILES is exported if you set it to anything.
make automatically passes down variable values that were defined on the command line, by putting them in the MAKEFLAGS variable. See the next section.
Variables are not normally passed down if they were created by default by make (see section Variables Used by Implicit Rules). The sub-make will define these for itself.
If you want to export specific variables to a sub-make, use the export directive, like this:

export variable …

If you want to prevent a variable from being exported, use the unexport directive, like this:

unexport variable …

As a convenience, you can define a variable and export it at the same time by doing:

export variable = value

has the same result as:

variable = value
export variable

and

export variable := value

has the same result as:

variable := value
export variable

Likewise,

export variable += value

is just like:

variable += value
export variable

See section Appending More Text to Variables.
You may notice that the export and unexport directives work in make in the same way they work in the shell, sh.
If you want all variables to be exported by default, you can use export by itself:

export

This tells make that variables which are not explicitly mentioned in an export or unexport directive should be exported. Any variable given in an unexport directive will still not be exported. If you use export by itself to export variables by default, variables whose names contain characters other than alphanumerics and underscores will not be exported unless specifically mentioned in an export directive.
The behavior elicited by an export directive by itself was the default in older versions of GNU make. If your makefiles depend on this behavior and you want to be compatible with old versions of make, you can write a rule for the special target .EXPORT_ALL_VARIABLES instead of using the export directive. This will be ignored by old makes, while the export directive will cause a syntax error.
Likewise, you can use unexport by itself to tell make not to export variables by default. Since this is the default behavior, you would only need to do this if export had been used by itself earlier (in an included makefile, perhaps). You cannot use export and unexport by themselves to have variables exported for some commands and not for others. The last export or unexport directive that appears by itself determines the behavior for the entire run of make.
As a special feature, the variable MAKELEVEL is changed when it is passed down from level to level. This variable’s value is a string which is the depth of the level as a decimal number. The value is 0 for the top-level make; 1 for a sub-make, 2 for a sub-sub-make, and so on. The incrementation happens when make sets up the environment for a command.
The main use of MAKELEVEL is to test it in a conditional directive (see section Conditional Parts of Makefiles); this way you can write a makefile that behaves one way if run recursively and another way if run directly by you.
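As a sketch, a makefile could test MAKELEVEL like this (the variable name and messages are illustrative):

```make
# Behave one way at the top level and another way in a sub-make.
ifeq ($(MAKELEVEL),0)
level-msg := running at the top level
else
level-msg := running as a sub-make (level $(MAKELEVEL))
endif

all:
        @echo $(level-msg)
```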
You can use the variable MAKEFILES to cause all sub-make commands to use additional makefiles. The value of MAKEFILES is a whitespace-separated list of file names. This variable, if defined in the outer-level makefile, is passed down through the environment; then it serves as a list of extra makefiles for the sub-make to read before the usual or specified ones. See section The Variable MAKEFILES.

Communicating Options to a Sub-make


Flags such as -s and -k are passed automatically to the sub-make through the variable MAKEFLAGS. This variable is set up automatically by make to contain the flag letters that make received. Thus, if you do make -ks then MAKEFLAGS gets the value ks.
As a consequence, every sub-make gets a value for MAKEFLAGS in its environment. In response, it takes the flags from that value and processes them as if they had been given as arguments. See section Summary of Options.
Likewise variables defined on the command line are passed to the sub-make through MAKEFLAGS. Words in the value of MAKEFLAGS that contain =, make treats as variable definitions just as if they appeared on the command line. See section Overriding Variables.
The options -C, -f, -o, and -W are not put into MAKEFLAGS; these options are not passed down.
The -j option is a special case (see section Parallel Execution). If you set it to some numeric value N and your operating system supports it (most any UNIX system will; others typically won’t), the parent make and all the sub-makes will communicate to ensure that there are only N jobs running at the same time between them all. Note that any job that is marked recursive (see section Instead of Executing the Commands) doesn’t count against the total jobs (otherwise we could get N sub-makes running and have no slots left over for any real work!)
If your operating system doesn’t support the above communication, then -j 1 is always put into MAKEFLAGS instead of the value you specified. This is because if the -j option were passed down to sub-makes, you would get many more jobs running in parallel than you asked for. If you give -j with no numeric argument, meaning to run as many jobs as possible in parallel, this is passed down, since multiple infinities are no more than one.
If you do not want to pass the other flags down, you must change the value of MAKEFLAGS, like this:

subsystem:
        cd subdir && $(MAKE) MAKEFLAGS=

The command line variable definitions really appear in the variable MAKEOVERRIDES, and MAKEFLAGS contains a reference to this variable. If you do want to pass flags down normally, but don’t want to pass down the command line variable definitions, you can reset MAKEOVERRIDES to empty, like this:

MAKEOVERRIDES =

This is not usually useful to do. However, some systems have a small fixed limit on the size of the environment, and putting so much information into the value of MAKEFLAGS can exceed it. If you see the error message Arg list too long, this may be the problem. (For strict compliance with POSIX.2, changing MAKEOVERRIDES does not affect MAKEFLAGS if the special target .POSIX appears in the makefile. You probably do not care about this.)
A similar variable MFLAGS exists also, for historical compatibility. It has the same value as MAKEFLAGS except that it does not contain the command line variable definitions, and it always begins with a hyphen unless it is empty (MAKEFLAGS begins with a hyphen only when it begins with an option that has no single-letter version, such as --warn-undefined-variables). MFLAGS was traditionally used explicitly in the recursive make command, like this:

subsystem:
        cd subdir && $(MAKE) $(MFLAGS)

but now MAKEFLAGS makes this usage redundant. If you want your makefiles to be compatible with old make programs, use this technique; it will work fine with more modern make versions too.
The MAKEFLAGS variable can also be useful if you want to have certain options, such as -k (see section Summary of Options), set each time you run make. You simply put a value for MAKEFLAGS in your environment. You can also set MAKEFLAGS in a makefile, to specify additional flags that should also be in effect for that makefile. (Note that you cannot use MFLAGS this way. That variable is set only for compatibility; make does not interpret a value you set for it in any way.)
When make interprets the value of MAKEFLAGS (either from the environment or from a makefile), it first prepends a hyphen if the value does not already begin with one. Then it chops the value into words separated by blanks, and parses these words as if they were options given on the command line (except that -C, -f, -h, -o, -W, and their long-named versions are ignored; and there is no error for an invalid option).
If you do put MAKEFLAGS in your environment, you should be sure not to include any options that will drastically affect the actions of make and undermine the purpose of makefiles and of make itself. For instance, the -t, -n, and -q options, if put in one of these variables, could have disastrous consequences and would certainly have at least surprising and probably annoying effects.
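For example, a makefile might append a flag it always wants in effect (a sketch, assuming a version of make that supports += on MAKEFLAGS):

```make
# Keep going past errors whenever this makefile is used.
MAKEFLAGS += -k
```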

The `--print-directory` Option


If you use several levels of recursive make invocations, the -w or --print-directory option can make the output a lot easier to understand by showing each directory as make starts processing it and as make finishes processing it. For example, if make -w is run in the directory /u/gnu/make, make will print a line of the form:

make: Entering directory `/u/gnu/make'.

before doing anything else, and a line of the form:

make: Leaving directory `/u/gnu/make'.

when processing is completed.

Normally, you do not need to specify this option because make does it for you: -w is turned on automatically when you use the -C option, and in sub-makes. make will not automatically turn on -w if you also use -s, which says to be silent, or if you use --no-print-directory to explicitly disable it.

Defining Canned Command Sequences

 

When the same sequence of commands is useful in making various targets, you can define it as a canned sequence with the define directive, and refer to the canned sequence from the rules for those targets. The canned sequence is actually a variable, so the name must not conflict with other variable names.

Here is an example of defining a canned sequence of commands:

define run-yacc
yacc $(firstword $^)
mv y.tab.c $@
endef

 

Here run-yacc is the name of the variable being defined; endef marks the end of the definition; the lines in between are the commands. The define directive does not expand variable references and function calls in the canned sequence; the $ characters, parentheses, variable names, and so on, all become part of the value of the variable you are defining. See section Defining Variables Verbatim, for a complete explanation of define.

The first command in this example runs Yacc on the first prerequisite of whichever rule uses the canned sequence. The output file from Yacc is always named y.tab.c. The second command moves the output to the rule’s target file name.

To use the canned sequence, substitute the variable into the commands of a rule. You can substitute it like any other variable (see section Basics of Variable References). Because variables defined by define are recursively expanded variables, all the variable references you wrote inside the define are expanded now. For example:

foo.c : foo.y
        $(run-yacc)

foo.y will be substituted for the variable $^ when it occurs in run-yacc’s value, and foo.c for $@.

This is a realistic example, but this particular one is not needed in practice because make has an implicit rule to figure out these commands based on the file names involved (see section Using Implicit Rules).

In command execution, each line of a canned sequence is treated just as if the line appeared on its own in the rule, preceded by a tab. In particular, make invokes a separate subshell for each line. You can use the special prefix characters that affect command lines (@, -, and +) on each line of a canned sequence. See section Writing the Commands in Rules. For example, using this canned sequence:

define frobnicate
@echo "frobnicating target $@"
frob-step-1 $< -o $@-step-1
frob-step-2 $@-step-1 -o $@
endef

make will not echo the first line, the echo command. But it will echo the following two command lines.

On the other hand, prefix characters on the command line that refers to a canned sequence apply to every line in the sequence. So the rule:

frob.out: frob.in
        @$(frobnicate)

does not echo any commands. (See section Command Echoing, for a full explanation of @.)

Using Empty Commands

 

It is sometimes useful to define commands which do nothing. This is done simply by giving a command that consists of nothing but whitespace. For example:

target: ;

defines an empty command string for target. You could also use a line beginning with a tab character to define an empty command string, but this would be confusing because such a line looks empty.

You may be wondering why you would want to define a command string that does nothing. The only reason this is useful is to prevent a target from getting implicit commands (from implicit rules or the .DEFAULT special target; see section Using Implicit Rules and section Defining Last-Resort Default Rules).

You may be inclined to define empty command strings for targets that are not actual files, but only exist so that their prerequisites can be remade. However, this is not the best way to do that, because the prerequisites may not be remade properly if the target file actually does exist. See section Phony Targets, for a better way to do this.

How to Use Variables

 

A variable is a name defined in a makefile to represent a string of text, called the variable’s value. These values are substituted by explicit request into targets, prerequisites, commands, and other parts of the makefile. (In some other versions of make, variables are called macros.)

Variables and functions in all parts of a makefile are expanded when read, except for the shell commands in rules, the right-hand sides of variable definitions using =, and the bodies of variable definitions using the define directive.

Variables can represent lists of file names, options to pass to compilers, programs to run, directories to look in for source files, directories to write output in, or anything else you can imagine.

A variable name may be any sequence of characters not containing :, #, =, or leading or trailing whitespace. However, variable names containing characters other than letters, numbers, and underscores should be avoided, as they may be given special meanings in the future, and with some shells they cannot be passed through the environment to a sub-make (see section Communicating Variables to a Sub-make).

Variable names are case-sensitive. The names foo, FOO, and Foo all refer to different variables.

It is traditional to use upper case letters in variable names, but we recommend using lower case letters for variable names that serve internal purposes in the makefile, and reserving upper case for parameters that control implicit rules or for parameters that the user should override with command options (see section Overriding Variables).
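A brief sketch of this convention (the names are chosen for illustration):

```make
# lower case: internal bookkeeping for this makefile
objects := main.o utils.o

# upper case: a parameter the user may override, e.g. `make CFLAGS=-O2`
CFLAGS = -g
```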

A few variables have names that are a single punctuation character or just a few characters. These are the automatic variables, and they have particular specialized uses. See section Automatic Variables.

Basics of Variable References

 

To substitute a variable’s value, write a dollar sign followed by the name of the variable in parentheses or braces: either $(foo) or ${foo} is a valid reference to the variable foo. This special significance of $ is why you must write $$ to have the effect of a single dollar sign in a file name or command.

Variable references can be used in any context: targets, prerequisites, commands, most directives, and new variable values. Here is an example of a common case, where a variable holds the names of all the object files in a program:

objects = program.o foo.o utils.o
program : $(objects)
        cc -o program $(objects)
$(objects) : defs.h

Variable references work by strict textual substitution. Thus, the rule

foo = c
prog.o : prog.$(foo)
        $(foo)$(foo) -$(foo) prog.$(foo)

could be used to compile a C program prog.c. Since spaces before the variable value are ignored in variable assignments, the value of foo is precisely c. (Don’t actually write your makefiles this way!)

A dollar sign followed by a character other than a dollar sign, open-parenthesis or open-brace treats that single character as the variable name. Thus, you could reference the variable x with $x. However, this practice is strongly discouraged, except in the case of the automatic variables (see section Automatic Variables).

The Two Flavors of Variables

 

There are two ways that a variable in GNU make can have a value; we call them the two flavors of variables. The two flavors are distinguished in how they are defined and in what they do when expanded.

The first flavor of variable is a recursively expanded variable. Variables of this sort are defined by lines using = (see section Setting Variables) or by the define directive (see section Defining Variables Verbatim). The value you specify is installed verbatim; if it contains references to other variables, these references are expanded whenever this variable is substituted (in the course of expanding some other string). When this happens, it is called recursive expansion.

For example,

foo = $(bar)
bar = $(ugh)
ugh = Huh?

all:;echo $(foo)

will echo Huh?: $(foo) expands to $(bar) which expands to $(ugh) which finally expands to Huh?.

This flavor of variable is the only sort supported by other versions of make. It has its advantages and its disadvantages. An advantage (most would say) is that:

CFLAGS = $(include_dirs) -O
include_dirs = -Ifoo -Ibar

will do what was intended: when CFLAGS is expanded in a command, it will expand to -Ifoo -Ibar -O. A major disadvantage is that you cannot append something on the end of a variable, as in

CFLAGS = $(CFLAGS) -O

because it will cause an infinite loop in the variable expansion. (Actually make detects the infinite loop and reports an error.)

Another disadvantage is that any functions (see section Functions for Transforming Text) referenced in the definition will be executed every time the variable is expanded. This makes make run slower; worse, it causes the wildcard and shell functions to give unpredictable results because you cannot easily control when they are called, or even how many times.
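A small sketch of this problem: with a recursively expanded variable, the function call below is re-run at every reference, so its value can change during a single run of make.

```make
# wildcard is re-evaluated each time $(objects) is expanded, so the
# list may grow as object files are created during the build.
objects = $(wildcard *.o)
```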

To avoid all the problems and inconveniences of recursively expanded variables, there is another flavor: simply expanded variables.

Simply expanded variables are defined by lines using := (see section Setting Variables). The value of a simply expanded variable is scanned once and for all, expanding any references to other variables and functions, when the variable is defined. The actual value of the simply expanded variable is the result of expanding the text that you write. It does not contain any references to other variables; it contains their values as of the time this variable was defined. Therefore,

x := foo
y := $(x) bar
x := later

is equivalent to

y := foo bar
x := later

When a simply expanded variable is referenced, its value is substituted verbatim.

Here is a somewhat more complicated example, illustrating the use of := in conjunction with the shell function. (See section The shell Function.) This example also shows use of the variable MAKELEVEL, which is changed when it is passed down from level to level. (See section Communicating Variables to a Sub-make, for information about MAKELEVEL.)

 

ifeq (0,${MAKELEVEL})
cur-dir   := $(shell pwd)
whoami    := $(shell whoami)
host-type := $(shell arch)
MAKE := ${MAKE} host-type=${host-type} whoami=${whoami}
endif

An advantage of this use of := is that a typical 'descend into a directory' command then looks like this:

${subdirs}:
        ${MAKE} cur-dir=${cur-dir}/$@ -C $@ all

Simply expanded variables generally make complicated makefile programming more predictable because they work like variables in most programming languages. They allow you to redefine a variable using its own value (or its value processed in some way by one of the expansion functions) and to use the expansion functions much more efficiently (see section Functions for Transforming Text).

You can also use them to introduce controlled leading whitespace into variable values. Leading whitespace characters are discarded from your input before substitution of variable references and function calls; this means you can include leading spaces in a variable value by protecting them with variable references, like this:

nullstring :=
space := $(nullstring) # end of the line

Here the value of the variable space is precisely one space. The comment # end of the line is included here just for clarity. Since trailing space characters are not stripped from variable values, just a space at the end of the line would have the same effect (but be rather hard to read). If you put whitespace at the end of a variable value, it is a good idea to put a comment like that at the end of the line to make your intent clear. Conversely, if you do not want any whitespace characters at the end of your variable value, you must remember not to put a random comment on the end of the line after some whitespace, such as this:

dir := /foo/bar # directory to put the frobs in

Here the value of the variable dir is /foo/bar (with four trailing spaces), which was probably not the intention. (Imagine something like $(dir)/file with this definition!)

There is another assignment operator for variables, ?=. This is called a conditional variable assignment operator, because it only has an effect if the variable is not yet defined. This statement:

FOO ?= bar

is exactly equivalent to this (see section The origin Function):

ifeq ($(origin FOO), undefined)
FOO = bar
endif

Note that a variable set to an empty value is still defined, so ?= will not set that variable.
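A minimal sketch of this distinction:

```make
FOO :=        # FOO is defined, with an empty value
FOO ?= bar    # has no effect: FOO is already defined
BAR ?= baz    # BAR was undefined, so BAR is now baz
```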

Advanced Features for Reference to Variables

 

This section describes some advanced features you can use to reference variables in more flexible ways.

Substitution References

 

A substitution reference substitutes the value of a variable with alterations that you specify. It has the form $(var:a=b) (or ${var:a=b}) and its meaning is to take the value of the variable var, replace every a at the end of a word with b in that value, and substitute the resulting string.

When we say “at the end of a word”, we mean that a must appear either followed by whitespace or at the end of the value in order to be replaced; other occurrences of a in the value are unaltered. For example:

foo := a.o b.o c.o
bar := $(foo:.o=.c)

sets bar to a.c b.c c.c. See section Setting Variables.

A substitution reference is actually an abbreviation for use of the patsubst expansion function (see section Functions for String Substitution and Analysis). We provide substitution references as well as patsubst for compatibility with other implementations of make.

Another type of substitution reference lets you use the full power of the patsubst function. It has the same form $(var:a=b) described above, except that now a must contain a single % character. This case is equivalent to $(patsubst a,b,$(var)). See section Functions for String Substitution and Analysis, for a description of the patsubst function.

For example:

foo := a.o b.o c.o
bar := $(foo:%.o=%.c)

sets bar to a.c b.c c.c.

Computed Variable Names

 

Computed variable names are a complicated concept needed only for sophisticated makefile programming. For most purposes you need not consider them, except to know that making a variable with a dollar sign in its name might have strange results. However, if you are the type that wants to understand everything, or you are actually interested in what they do, read on.

Variables may be referenced inside the name of a variable. This is called a computed variable name or a nested variable reference. For example,

x = y
y = z
a := $($(x))

defines a as z: the $(x) inside $($(x)) expands to y, so $($(x)) expands to $(y) which in turn expands to z. Here the name of the variable to reference is not stated explicitly; it is computed by expansion of $(x). The reference $(x) here is nested within the outer variable reference.

The previous example shows two levels of nesting, but any number of levels is possible. For example, here are three levels:

x = y
y = z
z = u
a := $($($(x)))

Here the innermost $(x) expands to y, so $($(x)) expands to $(y) which in turn expands to z; now we have $(z), which becomes u.

References to recursively-expanded variables within a variable name are reexpanded in the usual fashion. For example:

x = $(y)
y = z
z = Hello
a := $($(x))

defines a as Hello: $($(x)) becomes $($(y)) which becomes $(z) which becomes Hello.

Nested variable references can also contain modified references and function invocations (see section Functions for Transforming Text), just like any other reference. For example, using the subst function (see section Functions for String Substitution and Analysis):

x = variable1
variable2 := Hello
y = $(subst 1,2,$(x))
z = y
a := $($($(z)))

eventually defines a as Hello. It is doubtful that anyone would ever want to write a nested reference as convoluted as this one, but it works: $($($(z))) expands to $($(y)) which becomes $($(subst 1,2,$(x))). This gets the value variable1 from x and changes it by substitution to variable2, so that the entire string becomes $(variable2), a simple variable reference whose value is Hello.

A computed variable name need not consist entirely of a single variable reference. It can contain several variable references, as well as some invariant text. For example,

a_dirs := dira dirb
1_dirs := dir1 dir2

a_files := filea fileb
1_files := file1 file2

ifeq "$(use_a)" "yes"
a1 := a
else
a1 := 1
endif

ifeq "$(use_dirs)" "yes"
df := dirs
else
df := files
endif

dirs := $($(a1)_$(df))

will give dirs the same value as a_dirs, 1_dirs, a_files or 1_files depending on the settings of use_a and use_dirs.

Computed variable names can also be used in substitution references:

a_objects := a.o b.o c.o
1_objects := 1.o 2.o 3.o

sources := $($(a1)_objects:.o=.c)

defines sources as either a.c b.c c.c or 1.c 2.c 3.c, depending on the value of a1.

The only restriction on this sort of use of nested variable references is that they cannot specify part of the name of a function to be called. This is because the test for a recognized function name is done before the expansion of nested references. For example,

ifdef do_sort
func := sort
else
func := strip
endif

bar := a d b g q c

foo := $($(func) $(bar))

attempts to give foo the value of the variable sort a d b g q c or strip a d b g q c, rather than giving a d b g q c as the argument to either the sort or the strip function. This restriction could be removed in the future if that change is shown to be a good idea.

You can also use computed variable names in the left-hand side of a variable assignment, or in a define directive, as in:

dir = foo
$(dir)_sources := $(wildcard $(dir)/*.c)

define $(dir)_print
lpr $($(dir)_sources)
endef

This example defines the variables dir, foo_sources, and foo_print.

Note that nested variable references are quite different from recursively expanded variables (see section The Two Flavors of Variables), though both are used together in complex ways when doing makefile programming.

How Variables Get Their Values

 

Variables can get values in several different ways: you can set them in the makefile, override them on the command line, inherit them from the environment, or rely on make’s automatic and default values. The following sections describe these mechanisms.

Setting Variables

 

To set a variable from the makefile, write a line starting with the variable name followed by = or :=. Whatever follows the = or := on the line becomes the value. For example,

objects = main.o foo.o bar.o utils.o

defines a variable named objects. Whitespace around the variable name and immediately after the = is ignored.

Variables defined with = are recursively expanded variables. Variables defined with := are simply expanded variables; these definitions can contain variable references which will be expanded before the definition is made. See section The Two Flavors of Variables.

The variable name may contain function and variable references, which are expanded when the line is read to find the actual variable name to use.

There is no limit on the length of the value of a variable except the amount of swapping space on the computer. When a variable definition is long, it is a good idea to break it into several lines by inserting backslash-newline at convenient places in the definition. This will not affect the functioning of make, but it will make the makefile easier to read.
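For example, a long value can be continued with backslash-newline, and make joins the pieces into a single value:

```make
objects = main.o foo.o \
          bar.o utils.o \
          another.o
```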

Most variable names are considered to have the empty string as a value if you have never set them. Several variables have built-in initial values that are not empty, but you can set them in the usual ways (see section Variables Used by Implicit Rules). Several special variables are set automatically to a new value for each rule; these are called the automatic variables (see section Automatic Variables).

If you’d like a variable to be set to a value only if it’s not already set, then you can use the shorthand operator ?= instead of =. These two settings of the variable FOO are identical (see section The origin Function):

FOO ?= bar

and

ifeq ($(origin FOO), undefined)
FOO = bar
endif

Appending More Text to Variables

 

Often it is useful to add more text to the value of a variable already defined. You do this with a line containing +=, like this:

objects += another.o

This takes the value of the variable objects, and adds the text another.o to it (preceded by a single space). Thus:

objects = main.o foo.o bar.o utils.o
objects += another.o

sets objects to main.o foo.o bar.o utils.o another.o.

Using += is similar to:

objects = main.o foo.o bar.o utils.o
objects := $(objects) another.o

but differs in ways that become important when you use more complex values.

When the variable in question has not been defined before, += acts just like normal =: it defines a recursively-expanded variable. However, when there is a previous definition, exactly what += does depends on what flavor of variable you defined originally. See section The Two Flavors of Variables, for an explanation of the two flavors of variables.

When you add to a variable’s value with +=, make acts essentially as if you had included the extra text in the initial definition of the variable. If you defined it first with :=, making it a simply-expanded variable, += adds to that simply-expanded definition, and expands the new text before appending it to the old value just as := does (see section Setting Variables, for a full explanation of :=). In fact,

variable := value
variable += more

is exactly equivalent to:

variable := value
variable := $(variable) more

On the other hand, when you use += with a variable that you defined first to be recursively-expanded using plain =, make does something a bit different. Recall that when you define a recursively-expanded variable, make does not expand the value you set for variable and function references immediately. Instead it stores the text verbatim, and saves these variable and function references to be expanded later, when you refer to the new variable (see section The Two Flavors of Variables). When you use += on a recursively-expanded variable, it is this unexpanded text to which make appends the new text you specify.

variable = value
variable += more

is roughly equivalent to:

temp = value
variable = $(temp) more

except that of course it never defines a variable called temp. The importance of this comes when the variable’s old value contains variable references. Take this common example:

CFLAGS = $(includes) -O
...
CFLAGS += -pg # enable profiling

The first line defines the CFLAGS variable with a reference to another variable, includes. (CFLAGS is used by the rules for C compilation; see section Catalogue of Implicit Rules.) Using = for the definition makes CFLAGS a recursively-expanded variable, meaning $(includes) -O is not expanded when make processes the definition of CFLAGS. Thus, includes need not be defined yet for its value to take effect. It only has to be defined before any reference to CFLAGS. If we tried to append to the value of CFLAGS without using +=, we might do it like this:

CFLAGS := $(CFLAGS) -pg # enable profiling

This is pretty close, but not quite what we want. Using := redefines CFLAGS as a simply-expanded variable; this means make expands the text $(CFLAGS) -pg before setting the variable. If includes is not yet defined, we get -O -pg, and a later definition of includes will have no effect. Conversely, by using += we set CFLAGS to the unexpanded value $(includes) -O -pg. Thus we preserve the reference to includes, so if that variable gets defined at any later point, a reference like $(CFLAGS) still uses its value.
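A minimal sketch of this ordering, with illustrative values; the show rule merely prints the final expansion:

```makefile
CFLAGS = $(includes) -O
CFLAGS += -pg
includes = -Ifoo        # defined after the append, yet still takes effect

show: ; @echo $(CFLAGS)  # prints `-Ifoo -O -pg'
```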

The override Directive

 

If a variable has been set with a command argument (see section Overriding Variables), then ordinary assignments in the makefile are ignored. If you want to set the variable in the makefile even though it was set with a command argument, you can use an override directive, which is a line that looks like this:

override variable = value

or

override variable := value

To append more text to a variable defined on the command line, use:

override variable += more text

See section Appending More Text to Variables.

The override directive was not invented for escalation in the war between makefiles and command arguments. It was invented so you can alter and add to values that the user specifies with command arguments.

For example, suppose you always want the -g switch when you run the C compiler, but you would like to allow the user to specify the other switches with a command argument just as usual. You could use this override directive:

override CFLAGS += -g
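With a hypothetical makefile like the sketch below, running `make CFLAGS=-O2` compiles with `-O2 -g`; without the override directive, the command-line setting would discard the `-g`:

```makefile
override CFLAGS += -g

foo.o: foo.c ; $(CC) -c $(CFLAGS) foo.c
```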

You can also use override directives with define directives. This is done as you might expect:

override define foo
bar
endef

See the next section for information about define.

Defining Variables Verbatim

 

Another way to set the value of a variable is to use the define directive. This directive has an unusual syntax which allows newline characters to be included in the value, which is convenient for defining canned sequences of commands (see section Defining Canned Command Sequences).

The define directive is followed on the same line by the name of the variable and nothing more. The value to give the variable appears on the following lines. The end of the value is marked by a line containing just the word endef. Aside from this difference in syntax, define works just like =: it creates a recursively-expanded variable (see section The Two Flavors of Variables). The variable name may contain function and variable references, which are expanded when the directive is read to find the actual variable name to use.

define two-lines
echo foo
echo $(bar)
endef

The value in an ordinary assignment cannot contain a newline; but the newlines that separate the lines of the value in a define become part of the variable’s value (except for the final newline which precedes the endef and is not considered part of the value).

The previous example is functionally equivalent to this:

two-lines = echo foo; echo $(bar)

since two commands separated by semicolon behave much like two separate shell commands. However, note that using two separate lines means make will invoke the shell twice, running an independent subshell for each line. See section Command Execution.

If you want variable definitions made with define to take precedence over command-line variable definitions, you can use the override directive together with define:

override define two-lines
foo
$(bar)
endef

See section The override Directive.

Variables from the Environment

Variables in make can come from the environment in which make is run. Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. But an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the -e flag is specified, then values from the environment override assignments in the makefile. See section Summary of Options. But this is not recommended practice.)

Thus, by setting the variable CFLAGS in your environment, you can cause all C compilations in most makefiles to use the compiler switches you prefer. This is safe for variables with standard or conventional meanings because you know that no makefile will use them for other things. (But this is not totally reliable; some makefiles set CFLAGS explicitly and therefore are not affected by the value in the environment.)

When make is invoked recursively, variables defined in the outer invocation can be passed to inner invocations through the environment (see section Recursive Use of make). By default, only variables that came from the environment or the command line are passed to recursive invocations. You can use the export directive to pass other variables. See section Communicating Variables to a Sub-make, for full details.
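A sketch of passing an ordinary makefile variable down to a sub-make with the export directive (the variable and directory names are illustrative):

```makefile
TOOLDIR = /opt/tools
export TOOLDIR          # now visible in the environment of sub-makes

subdir: ; $(MAKE) -C subdir
```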

Other use of variables from the environment is not recommended. It is not wise for makefiles to depend for their functioning on environment variables set up outside their control, since this would cause different users to get different results from the same makefile. This is against the whole purpose of most makefiles.

Such problems would be especially likely with the variable SHELL, which is normally present in the environment to specify the user’s choice of interactive shell. It would be very undesirable for this choice to affect make. So make ignores the environment value of SHELL (except on MS-DOS and MS-Windows, where SHELL is usually not set. See section Command Execution.)

Target-specific Variable Values

Variable values in make are usually global; that is, they are the same regardless of where they are evaluated (unless they’re reset, of course). One exception to that is automatic variables (see section Automatic Variables).

The other exception is target-specific variable values. This feature allows you to define different values for the same variable, based on the target that make is currently building. As with automatic variables, these values are only available within the context of a target’s command script (and in other target-specific assignments).

Set a target-specific variable value like this:

target … : variable-assignment

or like this:

target … : override variable-assignment

Multiple target values create a target-specific variable value for each member of the target list individually.

The variable-assignment can be any valid form of assignment; recursive (=), static (:=), appending (+=), or conditional (?=). All variables that appear within the variable-assignment are evaluated within the context of the target: thus, any previously-defined target-specific variable values will be in effect. Note that this variable is actually distinct from any “global” value: the two variables do not have to have the same flavor (recursive vs. static).

Target-specific variables have the same priority as any other makefile variable. Variables provided on the command-line (and in the environment if the -e option is in force) will take precedence. Specifying the override directive will allow the target-specific variable value to be preferred.
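For instance, one might give different flags to different programs (the target names are illustrative):

```makefile
prog1 : CFLAGS = -O2
prog2 : CFLAGS = -g
# each program's commands (and those of its prerequisites) see its own CFLAGS
```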

There is one more special feature of target-specific variables: when you define a target-specific variable, that variable value is also in effect for all prerequisites of this target (unless those prerequisites override it with their own target-specific variable value). So, for example, a statement like this:

prog : CFLAGS = -g
prog : prog.o foo.o bar.o

will set CFLAGS to -g in the command script for prog, but it will also set CFLAGS to -g in the command scripts that create prog.o, foo.o, and bar.o, and any command scripts which create their prerequisites.

Pattern-specific Variable Values

 

In addition to target-specific variable values (see section Target-specific Variable Values), GNU make supports pattern-specific variable values. In this form, a variable is defined for any target that matches the pattern specified. Variables defined in this way are searched after any target-specific variables defined explicitly for that target, and before target-specific variables defined for the parent target.

Set a pattern-specific variable value like this:

pattern … : variable-assignment

or like this:

pattern … : override variable-assignment

where pattern is a %-pattern. As with target-specific variable values, multiple pattern values create a pattern-specific variable value for each pattern individually. The variable-assignment can be any valid form of assignment. Any command-line variable setting will take precedence, unless override is specified.

For example:

%.o : CFLAGS = -O

will assign CFLAGS the value of -O for all targets matching the pattern %.o.
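Pattern-specific assignments may also append; for example, this sketch adds a warning flag for all object files while leaving the global CFLAGS untouched for other targets:

```makefile
CFLAGS = -O
%.o : CFLAGS += -Wall    # object files compile with `-O -Wall'
```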

Conditional Parts of Makefiles

A conditional causes part of a makefile to be obeyed or ignored depending on the values of variables. Conditionals can compare the value of one variable to another, or the value of a variable to a constant string. Conditionals control what make actually “sees” in the makefile, so they cannot be used to control shell commands at the time of execution.

Example of a Conditional

The following example of a conditional tells make to use one set of libraries if the CC variable is gcc, and a different set of libraries otherwise. It works by controlling which of two command lines will be used as the command for a rule. The result is that CC=gcc as an argument to make changes not only which compiler is used but also which libraries are linked.

libs_for_gcc = -lgnu
normal_libs =

foo: $(objects)
ifeq ($(CC),gcc)
        $(CC) -o foo $(objects) $(libs_for_gcc)
else
        $(CC) -o foo $(objects) $(normal_libs)
endif

This conditional uses three directives: one ifeq, one else and one endif.

The ifeq directive begins the conditional, and specifies the condition. It contains two arguments, separated by a comma and surrounded by parentheses. Variable substitution is performed on both arguments and then they are compared. The lines of the makefile following the ifeq are obeyed if the two arguments match; otherwise they are ignored.

The else directive causes the following lines to be obeyed if the previous conditional failed. In the example above, this means that the second alternative linking command is used whenever the first alternative is not used. It is optional to have an else in a conditional.

The endif directive ends the conditional. Every conditional must end with an endif. Unconditional makefile text follows.

As this example illustrates, conditionals work at the textual level: the lines of the conditional are treated as part of the makefile, or ignored, according to the condition. This is why the larger syntactic units of the makefile, such as rules, may cross the beginning or the end of the conditional.

When the variable CC has the value gcc, the above example has this effect:

foo: $(objects)
        $(CC) -o foo $(objects) $(libs_for_gcc)

When the variable CC has any other value, the effect is this:

foo: $(objects)
        $(CC) -o foo $(objects) $(normal_libs)

Equivalent results can be obtained in another way by conditionalizing a variable assignment and then using the variable unconditionally:

libs_for_gcc = -lgnu
normal_libs =

ifeq ($(CC),gcc)
  libs=$(libs_for_gcc)
else
  libs=$(normal_libs)
endif

foo: $(objects)
        $(CC) -o foo $(objects) $(libs)

Syntax of Conditionals

 

The syntax of a simple conditional with no else is as follows:

conditional-directive
text-if-true
endif

The text-if-true may be any lines of text, to be considered as part of the makefile if the condition is true. If the condition is false, no text is used instead.

The syntax of a complex conditional is as follows:

conditional-directive
text-if-true
else
text-if-false
endif

If the condition is true, text-if-true is used; otherwise, text-if-false is used instead. The text-if-false can be any number of lines of text.

The syntax of the conditional-directive is the same whether the conditional is simple or complex. There are four different directives that test different conditions. Here is a table of them:

ifeq (arg1, arg2)

ifeq ‘arg1’ ‘arg2’

ifeq “arg1” “arg2”

ifeq “arg1” ‘arg2’

ifeq ‘arg1’ “arg2”

Expand all variable references in arg1 and arg2 and compare them. If they are identical, the text-if-true is effective; otherwise, the text-if-false, if any, is effective. Often you want to test if a variable has a non-empty value. When the value results from complex expansions of variables and functions, expansions you would consider empty may actually contain whitespace characters and thus are not seen as empty. However, you can use the strip function (see section Functions for String Substitution and Analysis) to avoid interpreting whitespace as a non-empty value. For example:

ifeq ($(strip $(foo)),)
text-if-empty
endif

will evaluate text-if-empty even if the expansion of $(foo) contains whitespace characters.

ifneq (arg1, arg2)

ifneq ‘arg1’ ‘arg2’

ifneq “arg1” “arg2”

ifneq “arg1” ‘arg2’

ifneq ‘arg1’ “arg2”

Expand all variable references in arg1 and arg2 and compare them. If they are different, the text-if-true is effective; otherwise, the text-if-false, if any, is effective.

ifdef variable-name

If the variable variable-name has a non-empty value, the text-if-true is effective; otherwise, the text-if-false, if any, is effective. Variables that have never been defined have an empty value. Note that ifdef only tests whether a variable has a value. It does not expand the variable to see if that value is nonempty. Consequently, tests using ifdef return true for all definitions except those like foo =. To test for an empty value, use ifeq ($(foo),). For example,

bar =
foo = $(bar)
ifdef foo
frobozz = yes
else
frobozz = no
endif

sets frobozz to yes, while:

foo =
ifdef foo
frobozz = yes
else
frobozz = no
endif

sets frobozz to no.

ifndef variable-name

If the variable variable-name has an empty value, the text-if-true is effective; otherwise, the text-if-false, if any, is effective.

Extra spaces are allowed and ignored at the beginning of the conditional directive line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command for a rule.) Aside from this, extra spaces or tabs may be inserted with no effect anywhere except within the directive name or within an argument. A comment starting with # may appear at the end of the line.

The other two directives that play a part in a conditional are else and endif. Each of these directives is written as one word, with no arguments. Extra spaces are allowed and ignored at the beginning of the line, and spaces or tabs at the end. A comment starting with # may appear at the end of the line.

Conditionals affect which lines of the makefile make uses. If the condition is true, make reads the lines of the text-if-true as part of the makefile; if the condition is false, make ignores those lines completely. It follows that syntactic units of the makefile, such as rules, may safely be split across the beginning or the end of the conditional.

make evaluates conditionals when it reads a makefile. Consequently, you cannot use automatic variables in the tests of conditionals because they are not defined until commands are run (see section Automatic Variables).
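For example, a test like the following is evaluated while the makefile is being read, when $@ is still empty, so it can never match (a sketch of the pitfall, not working code):

```makefile
ifeq ($@,foo.o)          # wrong: $@ expands to the empty string here
CFLAGS += -DSPECIAL
endif
```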

To prevent intolerable confusion, it is not permitted to start a conditional in one makefile and end it in another. However, you may write an include directive within a conditional, provided you do not attempt to terminate the conditional inside the included file.

Conditionals that Test Flags

You can write a conditional that tests make command flags such as -t by using the variable MAKEFLAGS together with the findstring function (see section Functions for String Substitution and Analysis). This is useful when touch is not enough to make a file appear up to date.

The findstring function determines whether one string appears as a substring of another. If you want to test for the -t flag, use t as the first string and the value of MAKEFLAGS as the other.

For example, here is how to arrange to use ranlib -t to finish marking an archive file up to date:

archive.a: ...
ifneq (,$(findstring t,$(MAKEFLAGS)))
        +touch archive.a
        +ranlib -t archive.a
else
        ranlib archive.a
endif

The + prefix marks those command lines as “recursive” so that they will be executed despite use of the -t flag. See section Recursive Use of make.

Functions for Transforming Text

 

Functions allow you to do text processing in the makefile to compute the files to operate on or the commands to use. You use a function in a function call, where you give the name of the function and some text (the arguments) for the function to operate on. The result of the function’s processing is substituted into the makefile at the point of the call, just as a variable might be substituted.

Function Call Syntax

 

A function call resembles a variable reference. It looks like this:

$(function arguments)

or like this:

${function arguments}

Here function is a function name; one of a short list of names that are part of make. You can also essentially create your own functions by using the call builtin function.

The arguments are the arguments of the function. They are separated from the function name by one or more spaces or tabs, and if there is more than one argument, then they are separated by commas. Such whitespace and commas are not part of an argument’s value. The delimiters which you use to surround the function call, whether parentheses or braces, can appear in an argument only in matching pairs; the other kind of delimiters may appear singly. If the arguments themselves contain other function calls or variable references, it is wisest to use the same kind of delimiters for all the references; write $(subst a,b,$(x)), not $(subst a,b,${x}). This is because it is clearer, and because only one type of delimiter is matched to find the end of the reference.

The text written for each argument is processed by substitution of variables and function calls to produce the argument value, which is the text on which the function acts. The substitution is done in the order in which the arguments appear.

Commas and unmatched parentheses or braces cannot appear in the text of an argument as written; leading spaces cannot appear in the text of the first argument as written. These characters can be put into the argument value by variable substitution. First define variables comma and space whose values are isolated comma and space characters, then substitute these variables where such characters are wanted, like this:

comma:= ,
empty:=
space:= $(empty) $(empty)
foo:= a b c
bar:= $(subst $(space),$(comma),$(foo))
# bar is now `a,b,c'.

Here the subst function replaces each space with a comma, through the value of foo, and substitutes the result.

Functions for String Substitution and Analysis

 

Here are some functions that operate on strings:

$(subst from,to,text)

Performs a textual replacement on the text text: each occurrence of from is replaced by to. The result is substituted for the function call. For example,

$(subst ee,EE,feet on the street)

substitutes the string fEEt on the strEEt.

$(patsubst pattern,replacement,text)

Finds whitespace-separated words in text that match pattern and replaces them with replacement. Here pattern may contain a % which acts as a wildcard, matching any number of any characters within a word. If replacement also contains a %, the % is replaced by the text that matched the % in pattern. % characters in patsubst function invocations can be quoted with preceding backslashes (\). Backslashes that would otherwise quote % characters can be quoted with more backslashes. Backslashes that quote % characters or other backslashes are removed from the pattern before it is compared to file names or has a stem substituted into it. Backslashes that are not in danger of quoting % characters go unmolested. For example, the pattern ‘the\%weird\\%pattern\\’ has ‘the%weird\’ preceding the operative % character, and ‘pattern\\’ following it. The final two backslashes are left alone because they cannot affect any % character. Whitespace between words is folded into single space characters; leading and trailing whitespace is discarded. For example,

$(patsubst %.c,%.o,x.c.c bar.c)

produces the value x.c.o bar.o. Substitution references (see section Substitution References) are a simpler way to get the effect of the patsubst function:

$(var:pattern=replacement)

is equivalent to

$(patsubst pattern,replacement,$(var))

The second shorthand simplifies one of the most common uses of patsubst: replacing the suffix at the end of file names.

$(var:suffix=replacement)

is equivalent to

$(patsubst %suffix,%replacement,$(var))

For example, you might have a list of object files:

objects = foo.o bar.o baz.o

To get the list of corresponding source files, you could simply write:

$(objects:.o=.c)

instead of using the general form:

$(patsubst %.o,%.c,$(objects))

$(strip string)

Removes leading and trailing whitespace from string and replaces each internal sequence of one or more whitespace characters with a single space. Thus, ‘$(strip a b  c )’ results in ‘a b c’. The function strip can be very useful when used in conjunction with conditionals. When comparing something with the empty string ‘’ using ifeq or ifneq, you usually want a string of just whitespace to match the empty string (see section Conditional Parts of Makefiles). Thus, the following may fail to have the desired results:

.PHONY: all
ifneq "$(needs_made)" ""
all: $(needs_made)
else
all:;@echo 'Nothing to make!'
endif

Replacing the variable reference $(needs_made) with the function call $(strip $(needs_made)) in the ifneq directive would make it more robust.
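Applying that change, the robust form of the example looks like this:

```makefile
.PHONY: all
ifneq "$(strip $(needs_made))" ""
all: $(needs_made)
else
all: ; @echo 'Nothing to make!'
endif
```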

$(findstring find,in)

Searches in for an occurrence of find. If it occurs, the value is find; otherwise, the value is empty. You can use this function in a conditional to test for the presence of a specific substring in a given string. Thus, the two examples,

$(findstring a,a b c)
$(findstring a,b c)

produce the values a and `` (the empty string), respectively. See section Conditionals that Test Flags, for a practical application of findstring.

$(filter pattern…,text)

Returns all whitespace-separated words in text that do match any of the pattern words, removing any words that do not match. The patterns are written using %, just like the patterns used in the patsubst function above. The filter function can be used to separate out different types of strings (such as file names) in a variable. For example:

sources := foo.c bar.c baz.s ugh.h
foo: $(sources)
        cc $(filter %.c %.s,$(sources)) -o foo

says that foo depends on foo.c, bar.c, baz.s and ugh.h, but that only foo.c, bar.c and baz.s should be specified in the command to the compiler.

$(filter-out pattern…,text)

Returns all whitespace-separated words in text that do not match any of the pattern words, removing the words that do match one or more. This is the exact opposite of the filter function. For example, given:

objects=main1.o foo.o main2.o bar.o
mains=main1.o main2.o

the following generates a list which contains all the object files not in mains:

$(filter-out $(mains),$(objects))

The result here is foo.o bar.o.

$(sort list)

Sorts the words of list in lexical order, removing duplicate words. The output is a list of words separated by single spaces. Thus,

$(sort foo bar lose)

returns the value bar foo lose. Incidentally, since sort removes duplicate words, you can use it for this purpose even if you don’t care about the sort order.

Here is a realistic example of the use of subst and patsubst. Suppose that a makefile uses the VPATH variable to specify a list of directories that make should search for prerequisite files (see section VPATH: Search Path for All Prerequisites). This example shows how to tell the C compiler to search for header files in the same list of directories.

The value of VPATH is a list of directories separated by colons, such as src:../headers. First, the subst function is used to change the colons to spaces:

$(subst :, ,$(VPATH))

This produces src ../headers. Then patsubst is used to turn each directory name into a -I flag. These can be added to the value of the variable CFLAGS, which is passed automatically to the C compiler, like this:

override CFLAGS += $(patsubst %,-I%,$(subst :, ,$(VPATH)))

The effect is to append the text -Isrc -I../headers to the previously given value of CFLAGS. The override directive is used so that the new value is assigned even if the previous value of CFLAGS was specified with a command argument (see section The override Directive).

Functions for File Names

 

Several of the built-in expansion functions relate specifically to taking apart file names or lists of file names.

Each of the following functions performs a specific transformation on a file name. The argument of the function is regarded as a series of file names, separated by whitespace. (Leading and trailing whitespace is ignored.) Each file name in the series is transformed in the same way and the results are concatenated with single spaces between them.

$(dir names…)

Extracts the directory-part of each file name in names. The directory-part of the file name is everything up through (and including) the last slash in it. If the file name contains no slash, the directory part is the string ./. For example,

$(dir src/foo.c hacks)

produces the result src/ ./.

$(notdir names…)

Extracts all but the directory-part of each file name in names. If the file name contains no slash, it is left unchanged. Otherwise, everything through the last slash is removed from it. A file name that ends with a slash becomes an empty string. This is unfortunate, because it means that the result does not always have the same number of whitespace-separated file names as the argument had; but we do not see any other valid alternative. For example,

$(notdir src/foo.c hacks)

produces the result foo.c hacks.

$(suffix names…)

Extracts the suffix of each file name in names. If the file name contains a period, the suffix is everything starting with the last period. Otherwise, the suffix is the empty string. This frequently means that the result will be empty when names is not, and if names contains multiple file names, the result may contain fewer file names. For example,

$(suffix src/foo.c src-1.0/bar.c hacks)

produces the result .c .c.

$(basename names…)

Extracts all but the suffix of each file name in names. If the file name contains a period, the basename is everything starting up to (and not including) the last period. Periods in the directory part are ignored. If there is no period, the basename is the entire file name. For example,

$(basename src/foo.c src-1.0/bar hacks)

produces the result src/foo src-1.0/bar hacks.

$(addsuffix suffix,names…)

The argument names is regarded as a series of names, separated by whitespace; suffix is used as a unit. The value of suffix is appended to the end of each individual name and the resulting larger names are concatenated with single spaces between them. For example,

$(addsuffix .c,foo bar)

produces the result foo.c bar.c.

$(addprefix prefix,names…)

The argument names is regarded as a series of names, separated by whitespace; prefix is used as a unit. The value of prefix is prepended to the front of each individual name and the resulting larger names are concatenated with single spaces between them. For example,

$(addprefix src/,foo bar)

produces the result src/foo src/bar.

$(join list1,list2)

Concatenates the two arguments word by word: the two first words (one from each argument) concatenated form the first word of the result, the two second words form the second word of the result, and so on. So the nth word of the result comes from the nth word of each argument. If one argument has more words than the other, the extra words are copied unchanged into the result. For example, $(join a b,.c .o) produces a.c b.o. Whitespace between the words in the lists is not preserved; it is replaced with a single space. This function can merge the results of the dir and notdir functions, to produce the original list of files which was given to those two functions.
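For example, splitting a file list with dir and notdir and then joining the pieces reproduces the original list (the file names are illustrative):

```makefile
files := src/foo.c lib/bar.c
parts := $(join $(dir $(files)),$(notdir $(files)))
# parts is again `src/foo.c lib/bar.c'
```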

$(word n,text)

Returns the nth word of text. The legitimate values of n start from 1. If n is bigger than the number of words in text, the value is empty. For example,

$(word 2, foo bar baz)

returns bar.

$(wordlist s,e,text)

Returns the list of words in text starting with word s and ending with word e (inclusive). The legitimate values of s and e start from 1. If s is bigger than the number of words in text, the value is empty. If e is bigger than the number of words in text, words up to the end of text are returned. If s is greater than e, nothing is returned. For example,

$(wordlist 2, 3, foo bar baz)

returns bar baz.

$(words text)

Returns the number of words in text. Thus, the last word of text is $(word $(words text),text).
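Spelled out as a sketch:

```makefile
list := foo bar baz
# $(words $(list)) is 3, so $(word 3,$(list)) selects the last word, baz
last := $(word $(words $(list)),$(list))
```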

$(firstword names…)

The argument names is regarded as a series of names, separated by whitespace. The value is the first name in the series. The rest of the names are ignored. For example,

$(firstword foo bar)

produces the result foo. Although $(firstword text) is the same as $(word 1,text), the firstword function is retained for its simplicity.

$(wildcard pattern)

The argument pattern is a file name pattern, typically containing wildcard characters (as in shell file name patterns). The result of wildcard is a space-separated list of the names of existing files that match the pattern. See section Using Wildcard Characters in File Names.
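A common use, sketched with an illustrative pattern:

```makefile
# Names every existing .c file in the current directory, then derives object names.
sources := $(wildcard *.c)
objects := $(sources:.c=.o)
```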

The foreach Function

 

The foreach function is very different from other functions. It causes one piece of text to be used repeatedly, each time with a different substitution performed on it. It resembles the for command in the shell sh and the foreach command in the C-shell csh.

The syntax of the foreach function is:

$(foreach var,list,text)

The first two arguments, var and list, are expanded before anything else is done; note that the last argument, text, is not expanded at the same time. Then for each word of the expanded value of list, the variable named by the expanded value of var is set to that word, and text is expanded. Presumably text contains references to that variable, so its expansion will be different each time.

The result is that text is expanded as many times as there are whitespace-separated words in list. The multiple expansions of text are concatenated, with spaces between them, to make the result of foreach.

This simple example sets the variable files to the list of all files in the directories in the list dirs:

dirs := a b c d
files := $(foreach dir,$(dirs),$(wildcard $(dir)/*))

Here text is $(wildcard $(dir)/*). The first repetition finds the value a for dir, so it produces the same result as $(wildcard a/*); the second repetition produces the result of $(wildcard b/*); and the third, that of $(wildcard c/*).

This example has the same result (except for setting dirs) as the following example:

files := $(wildcard a/* b/* c/* d/*)

When text is complicated, you can improve readability by giving it a name, with an additional variable:

find_files = $(wildcard $(dir)/*)
dirs := a b c d
files := $(foreach dir,$(dirs),$(find_files))

Here we use the variable find_files this way. We use plain = to define a recursively-expanding variable, so that its value contains an actual function call to be reexpanded under the control of foreach; a simply-expanded variable would not do, since wildcard would be called only once at the time of defining find_files.

The foreach function has no permanent effect on the variable var; its value and flavor after the foreach function call are the same as they were beforehand. The other values which are taken from list are in effect only temporarily, during the execution of foreach. The variable var is a simply-expanded variable during the execution of foreach. If var was undefined before the foreach function call, it is undefined after the call. See section The Two Flavors of Variables.

You must take care when using complex variable expressions that result in variable names because many strange things are valid variable names, but are probably not what you intended. For example,

files := $(foreach Esta escrito en espanol!,b c ch,$(find_files))

might be useful if the value of find_files references the variable whose name is Esta escrito en espanol! (quite a long name, isn't it?), but it is more likely to be a mistake.

The if Function

 

The if function provides support for conditional expansion in a functional context (as opposed to the GNU make makefile conditionals such as ifeq; see section Syntax of Conditionals).

An if function call can contain either two or three arguments:

$(if condition,then-part[,else-part])

The first argument, condition, first has all preceding and trailing whitespace stripped, then is expanded. If it expands to any non-empty string, then the condition is considered to be true. If it expands to an empty string, the condition is considered to be false.

If the condition is true then the second argument, then-part, is evaluated and this is used as the result of the evaluation of the entire if function.

If the condition is false then the third argument, else-part, is evaluated and this is the result of the if function. If there is no third argument, the if function evaluates to nothing (the empty string).

Note that only one of the then-part or the else-part will be evaluated, never both. Thus, either can contain side-effects (such as shell function calls, etc.)
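Since this section gives no inline example, here is a minimal sketch; DEBUG and FLAGS are illustrative names, not standard make variables:

```makefile
# If DEBUG expands to anything non-empty, use debug flags; otherwise optimize.
FLAGS := $(if $(DEBUG),-g -O0,-O2)
```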

The call Function

 

The call function is unique in that it can be used to create new parameterized functions. You can write a complex expression as the value of a variable, then use call to expand it with different values.

The syntax of the call function is:

$(call variable,param,param,…)

When make expands this function, it assigns each param to temporary variables $(1), $(2), etc. The variable $(0) will contain variable. There is no maximum number of parameter arguments. There is no minimum, either, but it doesn’t make sense to use call with no parameters.

Then variable is expanded as a make variable in the context of these temporary assignments. Thus, any reference to $(1) in the value of variable will resolve to the first param in the invocation of call.

Note that variable is the name of a variable, not a reference to that variable. Therefore you would not normally use a $ or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.)

If variable is the name of a builtin function, the builtin function is always invoked (even if a make variable by that name also exists).

The call function expands the param arguments before assigning them to temporary variables. This means that variable values containing references to builtin functions that have special expansion rules, like foreach or if, may not work as you expect.

Some examples may make this clearer.

This macro simply reverses its arguments:

reverse = $(2) $(1)
foo = $(call reverse,a,b)

Here foo will contain b a.

This one is slightly more interesting: it defines a macro to search for the first instance of a program in PATH:

pathsearch = $(firstword $(wildcard $(addsuffix /$(1),$(subst :, ,$(PATH)))))
LS := $(call pathsearch,ls)

Now the variable LS contains /bin/ls or similar.

The call function can be nested. Each recursive invocation gets its own local values for $(1), etc. that mask the values of the higher-level call. For example, here is an implementation of a map function:

map = $(foreach a,$(2),$(call $(1),$(a)))

Now you can map a function that normally takes only one argument, such as origin, to multiple values in one step:

o = $(call map,origin,o map MAKE)

and end up with o containing something like file file default.

A final caution: be careful when adding whitespace to the arguments to call. As with other functions, any whitespace contained in the second and subsequent arguments is kept; this can cause strange effects. It’s generally safest to remove all extraneous whitespace when providing parameters to call.
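A minimal sketch of the pitfall, using the two-argument reverse macro defined earlier in this section:

```makefile
reverse = $(2) $(1)
tight := $(call reverse,a,b)
loose := $(call reverse,a, b)
# tight is "b a"; loose is " b a", because the space after the comma stays in $(2)
```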

The origin Function

 

The origin function is unlike most other functions in that it does not operate on the values of variables; it tells you something about a variable. Specifically, it tells you where it came from.

The syntax of the origin function is:

$(origin variable)

Note that variable is the name of a variable to inquire about; not a reference to that variable. Therefore you would not normally use a $ or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.)

The result of this function is a string telling you how the variable variable was defined:

undefined

if variable was never defined.

default

if variable has a default definition, as is usual with CC and so on. See section Variables Used by Implicit Rules. Note that if you have redefined a default variable, the origin function will return the origin of the later definition.

environment

if variable was defined as an environment variable and the -e option is not turned on (see section Summary of Options).

environment override

if variable was defined as an environment variable and the -e option is turned on (see section Summary of Options).

file

if variable was defined in a makefile.

command line

if variable was defined on the command line.

override

if variable was defined with an override directive in a makefile (see section The override Directive).

automatic

if variable is an automatic variable defined for the execution of the commands for each rule (see section Automatic Variables).

This information is primarily useful (other than for your curiosity) to determine if you want to believe the value of a variable. For example, suppose you have a makefile foo that includes another makefile bar. You want a variable bletch to be defined in bar if you run the command make -f bar, even if the environment contains a definition of bletch. However, if foo defined bletch before including bar, you do not want to override that definition. This could be done by using an override directive in foo, giving that definition precedence over the later definition in bar; unfortunately, the override directive would also override any command line definitions. So, bar could include:

ifdef bletch
ifeq "$(origin bletch)" "environment"
bletch = barf, gag, etc.
endif
endif

If bletch has been defined from the environment, this will redefine it.

If you want to override a previous definition of bletch if it came from the environment, even under -e, you could instead write:

ifneq "$(findstring environment,$(origin bletch))" ""
bletch = barf, gag, etc.
endif

Here the redefinition takes place if $(origin bletch) returns either environment or environment override. See section Functions for String Substitution and Analysis.

The shell Function

 

The shell function is unlike any other function except the wildcard function (see section The Function wildcard) in that it communicates with the world outside of make.

The shell function performs the same function that backquotes (`) perform in most shells: it does command expansion. This means that it takes an argument that is a shell command and returns the output of the command. The only processing make does on the result, before substituting it into the surrounding text, is to convert each newline or carriage-return / newline pair to a single space. It also removes the trailing (carriage-return and) newline, if it’s the last thing in the result.

The commands run by calls to the shell function are run when the function calls are expanded. In most cases, this is when the makefile is read in. The exception is that function calls in the commands of the rules are expanded when the commands are run, and this applies to shell function calls like all others.

Here are some examples of the use of the shell function:

contents := $(shell cat foo)

sets contents to the contents of the file foo, with a space (rather than a newline) separating each line.

files := $(shell echo *.c)

sets files to the expansion of *.c. Unless make is using a very strange shell, this has the same result as $(wildcard *.c).

Functions That Control Make

 

These functions control the way make runs. Generally, they are used to provide information to the user of the makefile or to cause make to stop if some sort of environmental error is detected.

$(error text…)

Generates a fatal error where the message is text. Note that the error is generated whenever this function is evaluated. So, if you put it inside a command script or on the right side of a recursive variable assignment, it won’t be evaluated until later. The text will be expanded before the error is generated. For example,

ifdef ERROR1
$(error error is $(ERROR1))
endif

will generate a fatal error during the read of the makefile if the make variable ERROR1 is defined. Or,

ERR = $(error found an error!)
.PHONY: err
err: ; $(ERR)

will generate a fatal error while make is running, if the err target is invoked.

$(warning text…)

This function works similarly to the error function, above, except that make doesn’t exit. Instead, text is expanded and the resulting message is displayed, but processing of the makefile continues. The result of the expansion of this function is the empty string.

How to Run make

A makefile that says how to recompile a program can be used in more than one way. The simplest use is to recompile every file that is out of date. Usually, makefiles are written so that if you run make with no arguments, it does just that.

But you might want to update only some of the files; you might want to use a different compiler or different compiler options; you might want just to find out which files are out of date without changing them.

By giving arguments when you run make, you can do any of these things and many others.

The exit status of make is always one of three values:

0

The exit status is zero if make is successful.

2

The exit status is two if make encounters any errors. It will print messages describing the particular errors.

1

The exit status is one if you use the -q flag and make determines that some target is not already up to date. See section Instead of Executing the Commands.

Arguments to Specify the Makefile

 

The way to specify the name of the makefile is with the -f or --file option (--makefile also works). For example, -f altmake says to use the file altmake as the makefile.

If you use the -f flag several times and follow each -f with an argument, all the specified files are used jointly as makefiles.

If you do not use the -f or --file flag, the default is to try GNUmakefile, makefile, and Makefile, in that order, and use the first of these three which exists or can be made (see section Writing Makefiles).

Arguments to Specify the Goals

 

The goals are the targets that make should strive ultimately to update. Other targets are updated as well if they appear as prerequisites of goals, or prerequisites of prerequisites of goals, etc.

By default, the goal is the first target in the makefile (not counting targets that start with a period). Therefore, makefiles are usually written so that the first target is for compiling the entire program or programs they describe. If the first rule in the makefile has several targets, only the first target in the rule becomes the default goal, not the whole list.
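For example, in a makefile sketched like this (the target names are hypothetical), plain make updates prog because it is the first target, while docs is built only when requested by name:

```makefile
# "prog" is the first target, so it becomes the default goal.
prog: main.o util.o
docs: manual.txt
```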

You can specify a different goal or goals with arguments to make. Use the name of the goal as an argument. If you specify several goals, make processes each of them in turn, in the order you name them.

Any target in the makefile may be specified as a goal (unless it starts with - or contains an =, in which case it will be parsed as a switch or variable definition, respectively). Even targets not in the makefile may be specified, if make can find implicit rules that say how to make them.

Make will set the special variable MAKECMDGOALS to the list of goals you specified on the command line. If no goals were given on the command line, this variable is empty. Note that this variable should be used only in special circumstances.

An example of appropriate use is to avoid including .d files during clean rules (see section Generating Prerequisites Automatically), so make won’t create them only to immediately remove them again:

sources = foo.c bar.c
ifneq ($(MAKECMDGOALS),clean)
include $(sources:.c=.d)
endif

One use of specifying a goal is if you want to compile only a part of the program, or only one of several programs. Specify as a goal each file that you wish to remake. For example, consider a directory containing several programs, with a makefile that starts like this:

.PHONY: all
all: size nm ld ar as

If you are working on the program size, you might want to say make size so that only the files of that program are recompiled.

Another use of specifying a goal is to make files that are not normally made. For example, there may be a file of debugging output, or a version of the program that is compiled specially for testing, which has a rule in the makefile but is not a prerequisite of the default goal.

Another use of specifying a goal is to run the commands associated with a phony target (see section Phony Targets) or empty target (see section Empty Target Files to Record Events). Many makefiles contain a phony target named clean which deletes everything except source files. Naturally, this is done only if you request it explicitly with make clean. Following is a list of typical phony and empty target names. See section Standard Targets for Users, for a detailed list of all the standard target names which GNU software packages use.

all

Make all the top-level targets the makefile knows about.

clean

Delete all files that are normally created by running make.

mostlyclean

Like clean, but may refrain from deleting a few files that people normally don’t want to recompile. For example, the mostlyclean target for GCC does not delete libgcc.a, because recompiling it is rarely necessary and takes a lot of time.

distclean

realclean

clobber

Any of these targets might be defined to delete more files than clean does. For example, this would delete configuration files or links that you would normally create as preparation for compilation, even if the makefile itself cannot create these files.

install

Copy the executable file into a directory that users typically search for commands; copy any auxiliary files that the executable uses into the directories where it will look for them.

print

Print listings of the source files that have changed.

tar

Create a tar file of the source files.

shar

Create a shell archive (shar file) of the source files.

dist

Create a distribution file of the source files. This might be a tar file, or a shar file, or a compressed version of one of the above, or even more than one of the above.

TAGS

Update a tags table for this program.

check

test

Perform self tests on the program this makefile builds.

Instead of Executing the Commands

 

The makefile tells make how to tell whether a target is up to date, and how to update each target. But updating the targets is not always what you want. Certain options specify other activities for make.

-n

--just-print

--dry-run

--recon

"No-op”. The activity is to print what commands would be used to make the targets up to date, but not actually execute them.

-t

--touch

"Touch”. The activity is to mark the targets as up to date without actually changing them. In other words, make pretends to compile the targets but does not really change their contents.

-q

--question

"Question”. The activity is to find out silently whether the targets are up to date already; but execute no commands in either case. In other words, neither compilation nor output will occur.

-W file

--what-if=file

--assume-new=file

--new-file=file

"What if”. Each -W flag is followed by a file name. The given files’ modification times are recorded by make as being the present time, although the actual modification times remain the same. You can use the -W flag in conjunction with the -n flag to see what would happen if you were to modify specific files.

With the -n flag, make prints the commands that it would normally execute but does not execute them.

With the -t flag, make ignores the commands in the rules and uses (in effect) the command touch for each target that needs to be remade. The touch command is also printed, unless -s or .SILENT is used. For speed, make does not actually invoke the program touch. It does the work directly.

With the -q flag, make prints nothing and executes no commands, but the exit status code it returns is zero if and only if the targets to be considered are already up to date. If the exit status is one, then some updating needs to be done. If make encounters an error, the exit status is two, so you can distinguish an error from a target that is not up to date.
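These three exit codes make the -q flag convenient in scripts; a brief sketch:

```shell
# Sketch: branch on make -q's three-way exit status.
make -q
case $? in
  0) echo "targets are up to date" ;;
  1) echo "some targets need remaking" ;;
  *) echo "make reported an error" ;;
esac
```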

It is an error to use more than one of these three flags in the same invocation of make.

The -n, -t, and -q options do not affect command lines that begin with + characters or contain the strings $(MAKE) or ${MAKE}. Note that only the line containing the + character or the strings $(MAKE) or ${MAKE} is run regardless of these options. Other lines in the same rule are not run unless they too begin with + or contain $(MAKE) or ${MAKE} (See section How the MAKE Variable Works.)

The -W flag provides two features:

  • If you also use the -n or -q flag, you can see what make would do if you were to modify some files.
  • Without the -n or -q flag, when make is actually executing commands, the -W flag can direct make to act as if some files had been modified, without actually modifying the files.

Note that the options -p and -v allow you to obtain other information about make or about the makefiles in use (see section Summary of Options).

Avoiding Recompilation of Some Files

 

Sometimes you may have changed a source file but you do not want to recompile all the files that depend on it. For example, suppose you add a macro or a declaration to a header file that many other files depend on. Being conservative, make assumes that any change in the header file requires recompilation of all dependent files, but you know that they do not need to be recompiled and you would rather not waste the time waiting for them to compile.

If you anticipate the problem before changing the header file, you can use the -t flag. This flag tells make not to run the commands in the rules, but rather to mark the target up to date by changing its last-modification date. You would follow this procedure:

  1. Use the command make to recompile the source files that really need recompilation.
  2. Make the changes in the header files.
  3. Use the command make -t to mark all the object files as up to date. The next time you run make, the changes in the header files will not cause any recompilation.
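As a command sketch (config.h stands for whichever header you are about to change):

```shell
make               # step 1: rebuild everything that genuinely needs it
$EDITOR config.h   # step 2: now make the harmless change to the header
make -t            # step 3: mark the objects up to date; no recompilation follows
```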

If you have already changed the header file at a time when some files do need recompilation, it is too late to do this. Instead, you can use the -o file flag, which marks a specified file as “old” (see section Summary of Options). This means that the file itself will not be remade, and nothing else will be remade on its account. Follow this procedure:

  1. Recompile the source files that need compilation for reasons independent of the particular header file, with make -o headerfile. If several header files are involved, use a separate -o option for each header file.
  2. Touch all the object files with make -t.

Overriding Variables

 

An argument that contains = specifies the value of a variable: v=x sets the value of the variable v to x. If you specify a value in this way, all ordinary assignments of the same variable in the makefile are ignored; we say they have been overridden by the command line argument.

The most common way to use this facility is to pass extra flags to compilers. For example, in a properly written makefile, the variable CFLAGS is included in each command that runs the C compiler, so a file foo.c would be compiled something like this:

cc -c $(CFLAGS) foo.c

Thus, whatever value you set for CFLAGS affects each compilation that occurs. The makefile probably specifies the usual value for CFLAGS, like this:

CFLAGS=-g

Each time you run make, you can override this value if you wish. For example, if you say make CFLAGS='-g -O', each C compilation will be done with cc -c -g -O. (This illustrates how you can use quoting in the shell to enclose spaces and other special characters in the value of a variable when you override it.)

The variable CFLAGS is only one of many standard variables that exist just so that you can change them this way. See section Variables Used by Implicit Rules, for a complete list.

You can also program the makefile to look at additional variables of your own, giving the user the ability to control other aspects of how the makefile works by changing the variables.

When you override a variable with a command argument, you can define either a recursively-expanded variable or a simply-expanded variable. The examples shown above make a recursively-expanded variable; to make a simply-expanded variable, write := instead of =. But, unless you want to include a variable reference or function call in the value that you specify, it makes no difference which kind of variable you create.
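Sketched on the command line (the variable values are illustrative):

```shell
make CFLAGS='-g -O'          # recursively-expanded override (plain =)
make 'CFLAGS:=-g $(EXTRA)'   # simply-expanded: $(EXTRA) is expanded once, up front
```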

There is one way that the makefile can change a variable that you have overridden. This is to use the override directive, which is a line that looks like this: override variable = value (see section The override Directive).

Testing the Compilation of a Program

 

Normally, when an error happens in executing a shell command, make gives up immediately, returning a nonzero status. No further commands are executed for any target. The error implies that the goal cannot be correctly remade, and make reports this as soon as it knows.

When you are compiling a program that you have just changed, this is not what you want. Instead, you would rather that make try compiling every file that can be tried, to show you as many compilation errors as possible.

On these occasions, you should use the -k or --keep-going flag. This tells make to continue to consider the other prerequisites of the pending targets, remaking them if necessary, before it gives up and returns nonzero status. For example, after an error in compiling one object file, make -k will continue compiling other object files even though it already knows that linking them will be impossible. In addition to continuing after failed shell commands, make -k will continue as much as possible after discovering that it does not know how to make a target or prerequisite file. This will always cause an error message, but without -k, it is a fatal error (see section Summary of Options).

The usual behavior of make assumes that your purpose is to get the goals up to date; once make learns that this is impossible, it might as well report the failure immediately. The -k flag says that the real purpose is to test as much as possible of the changes made in the program, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs’ M-x compile command passes the -k flag by default.

Summary of Options

 

Here is a table of all the options make understands:

-b

-m

These options are ignored for compatibility with other versions of make.

-C dir

--directory=dir

Change to directory dir before reading the makefiles. If multiple -C options are specified, each is interpreted relative to the previous one: -C / -C etc is equivalent to -C /etc. This is typically used with recursive invocations of make (see section Recursive Use of make).

-d

Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied; in short, everything interesting about how make decides what to do. The -d option is equivalent to --debug=a (see below).

--debug[=options]

Print debugging information in addition to normal processing. Various levels and types of output can be chosen. With no arguments, print the “basic” level of debugging. Possible arguments are below; only the first character is considered, and values must be comma- or space-separated.

a (all)

All types of debugging output are enabled. This is equivalent to using -d.

b (basic)

Basic debugging prints each target that was found to be out-of-date, and whether the build was successful or not.

v (verbose)

A level above basic; includes messages about which makefiles were parsed, prerequisites that did not need to be rebuilt, etc. This option also enables basic messages.

i (implicit)

Prints messages describing the implicit rule searches for each target. This option also enables basic messages.

j (jobs)

Prints messages giving details on the invocation of specific subcommands.

m (makefile)

By default, the above messages are not enabled while trying to remake the makefiles. This option enables messages while rebuilding makefiles, too. Note that the all option does enable this option. This option also enables basic messages.

-e

--environment-overrides

Give variables taken from the environment precedence over variables from makefiles. See section Variables from the Environment.

-f file

--file=file

--makefile=file

Read the file named file as a makefile. See section Writing Makefiles.

-h

--help

Remind you of the options that make understands and then exit.

-i

--ignore-errors

Ignore all errors in commands executed to remake files. See section Errors in Commands.

-I dir

--include-dir=dir

Specifies a directory dir to search for included makefiles. See section Including Other Makefiles. If several -I options are used to specify several directories, the directories are searched in the order specified.

-j [jobs]

--jobs[=jobs]

Specifies the number of jobs (commands) to run simultaneously. With no argument, make runs as many jobs simultaneously as possible. If there is more than one -j option, the last one is effective. See section Parallel Execution, for more information on how commands are run. Note that this option is ignored on MS-DOS.

-k

--keep-going

Continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other prerequisites of these targets can be processed all the same. See section Testing the Compilation of a Program.

-l [load]

--load-average[=load]

--max-load[=load]

Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number). With no argument, removes a previous load limit. See section Parallel Execution.

-n

--just-print

--dry-run

--recon

Print the commands that would be executed, but do not execute them. See section Instead of Executing the Commands.

-o file

--old-file=file

--assume-old=file

Do not remake the file file even if it is older than its prerequisites, and do not remake anything on account of changes in file. Essentially the file is treated as very old and its rules are ignored. See section Avoiding Recompilation of Some Files.

-p

--print-data-base

Print the data base (rules and variable values) that results from reading the makefiles; then execute as usual or as otherwise specified. This also prints the version information given by the -v switch (see below). To print the data base without trying to remake any files, use make -qp. To print the data base of predefined rules and variables, use make -p -f /dev/null. The data base output contains filename and linenumber information for command and variable definitions, so it can be a useful debugging tool in complex environments.

-q

--question

“Question mode”. Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, one if any remaking is required, or two if an error is encountered. See section Instead of Executing the Commands.

-r

--no-builtin-rules

Eliminate use of the built-in implicit rules (see section Using Implicit Rules). You can still define your own by writing pattern rules (see section Defining and Redefining Pattern Rules). The -r option also clears out the default list of suffixes for suffix rules (see section Old-Fashioned Suffix Rules). But you can still define your own suffixes with a rule for .SUFFIXES, and then define your own suffix rules. Note that only rules are affected by the -r option; default variables remain in effect (see section Variables Used by Implicit Rules); see the -R option below.

-R

--no-builtin-variables

Eliminate use of the built-in rule-specific variables (see section Variables Used by Implicit Rules). You can still define your own, of course. The -R option also automatically enables the -r option (see above), since it doesn’t make sense to have implicit rules without any definitions for the variables that they use.

-s

--silent

--quiet

Silent operation; do not print the commands as they are executed. See section Command Echoing.

-S

--no-keep-going

--stop

Cancel the effect of the -k option. This is never necessary except in a recursive make where -k might be inherited from the top-level make via MAKEFLAGS (see section Recursive Use of make) or if you set -k in MAKEFLAGS in your environment.

-t

--touch

Touch files (mark them up to date without really changing them) instead of running their commands. This is used to pretend that the commands were done, in order to fool future invocations of make. See section Instead of Executing the Commands.

-v

--version

Print the version of the make program plus a copyright, a list of authors, and a notice that there is no warranty; then exit.

-w

--print-directory

Print a message containing the working directory both before and after executing the makefile. This may be useful for tracking down errors from complicated nests of recursive make commands. See section Recursive Use of make. (In practice, you rarely need to specify this option since make does it for you; see section The --print-directory Option.)

--no-print-directory

Disable printing of the working directory under -w. This option is useful when -w is turned on automatically, but you do not want to see the extra messages. See section The --print-directory Option.

-W file

--what-if=file

--new-file=file

--assume-new=file

Pretend that the target file has just been modified. When used with the -n flag, this shows you what would happen if you were to modify that file. Without -n, it is almost the same as running a touch command on the given file before running make, except that the modification time is changed only in the imagination of make. See section Instead of Executing the Commands.

--warn-undefined-variables

Issue a warning message whenever make sees a reference to an undefined variable. This can be helpful when you are trying to debug makefiles which use variables in complex ways.

Using Implicit Rules

 

Certain standard ways of remaking target files are used very often. For example, one customary way to make an object file is from a C source file using the C compiler, cc.

Implicit rules tell make how to use customary techniques so that you do not have to specify them in detail when you want to use them. For example, there is an implicit rule for C compilation. File names determine which implicit rules are run. For example, C compilation typically takes a .c file and makes a .o file. So make applies the implicit rule for C compilation when it sees this combination of file name endings.

A chain of implicit rules can apply in sequence; for example, make will remake a .o file from a .y file by way of a .c file. See section Chains of Implicit Rules.
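As a minimal sketch (the file name parse.y is hypothetical), the built-in rules let a makefile request only the final object file and leave the whole chain to make:

```makefile
# parse.y is the only source file present. With no pattern rules written
# here, make chains two built-in implicit rules:
#   parse.y --(Yacc rule)--> parse.c --(C compilation rule)--> parse.o
all: parse.o
```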

The built-in implicit rules use several variables in their commands so that, by changing the values of the variables, you can change the way the implicit rule works. For example, the variable CFLAGS controls the flags given to the C compiler by the implicit rule for C compilation. See section Variables Used by Implicit Rules.

You can define your own implicit rules by writing pattern rules. See section Defining and Redefining Pattern Rules.

Suffix rules are a more limited way to define implicit rules. Pattern rules are more general and clearer, but suffix rules are retained for compatibility. See section Old-Fashioned Suffix Rules.

Using Implicit Rules

 

To allow make to find a customary method for updating a target file, all you have to do is refrain from specifying commands yourself. Either write a rule with no command lines, or don’t write a rule at all. Then make will figure out which implicit rule to use based on which kind of source file exists or can be made.

For example, suppose the makefile looks like this:

foo : foo.o bar.o
        cc -o foo foo.o bar.o $(CFLAGS) $(LDFLAGS)

Because you mention foo.o but do not give a rule for it, make will automatically look for an implicit rule that tells how to update it. This happens whether or not the file foo.o currently exists.

If an implicit rule is found, it can supply both commands and one or more prerequisites (the source files). You would want to write a rule for foo.o with no command lines if you need to specify additional prerequisites, such as header files, that the implicit rule cannot supply.
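For example, a command-less rule can supply the header prerequisites that the implicit rule cannot discover (defs.h and config.h are hypothetical names):

```makefile
foo : foo.o bar.o
	cc -o foo foo.o bar.o $(CFLAGS) $(LDFLAGS)

# No command lines here, so the built-in C compilation rule still supplies
# the commands; this line only adds extra prerequisites for foo.o.
foo.o : defs.h config.h
```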

Each implicit rule has a target pattern and prerequisite patterns. There may be many implicit rules with the same target pattern. For example, numerous rules make .o files: one, from a .c file with the C compiler; another, from a .p file with the Pascal compiler; and so on. The rule that actually applies is the one whose prerequisites exist or can be made. So, if you have a file foo.c, make will run the C compiler; otherwise, if you have a file foo.p, make will run the Pascal compiler; and so on.

Of course, when you write the makefile, you know which implicit rule you want make to use, and you know it will choose that one because you know which possible prerequisite files are supposed to exist. See section Catalogue of Implicit Rules, for a catalogue of all the predefined implicit rules.

Above, we said an implicit rule applies if the required prerequisites “exist or can be made”. A file “can be made” if it is mentioned explicitly in the makefile as a target or a prerequisite, or if an implicit rule can be recursively found for how to make it. When an implicit prerequisite is the result of another implicit rule, we say that chaining is occurring. See section Chains of Implicit Rules.

In general, make searches for an implicit rule for each target, and for each double-colon rule, that has no commands. A file that is mentioned only as a prerequisite is considered a target whose rule specifies nothing, so implicit rule search happens for it. See section Implicit Rule Search Algorithm, for the details of how the search is done.

Note that explicit prerequisites do not influence implicit rule search. For example, consider this explicit rule:

foo.o: foo.p

The prerequisite on foo.p does not necessarily mean that make will remake foo.o according to the implicit rule to make an object file, a .o file, from a Pascal source file, a .p file. For example, if foo.c also exists, the implicit rule to make an object file from a C source file is used instead, because it appears before the Pascal rule in the list of predefined implicit rules (see section Catalogue of Implicit Rules).

If you do not want an implicit rule to be used for a target that has no commands, you can give that target empty commands by writing a semicolon (see section Using Empty Commands).
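A one-line sketch of that technique (the target name is hypothetical):

```makefile
# The lone semicolon gives target.stamp empty commands, so make never
# searches for an implicit rule to rebuild it.
target.stamp: ;
```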

Catalogue of Implicit Rules

 

Here is a catalogue of predefined implicit rules which are always available unless the makefile explicitly overrides or cancels them. See section Canceling Implicit Rules, for information on canceling or overriding an implicit rule. The -r or --no-builtin-rules option cancels all predefined rules.

Not all of these rules will always be defined, even when the -r option is not given. Many of the predefined implicit rules are implemented in make as suffix rules, so which ones will be defined depends on the suffix list (the list of prerequisites of the special target .SUFFIXES). The default suffix list is: .out, .a, .ln, .o, .c, .cc, .C, .p, .f, .F, .r, .y, .l, .s, .S, .mod, .sym, .def, .h, .info, .dvi, .tex, .texinfo, .texi, .txinfo, .w, .ch, .web, .sh, .elc, .el. All of the implicit rules described below whose prerequisites have one of these suffixes are actually suffix rules. If you modify the suffix list, the only predefined suffix rules in effect will be those named by one or two of the suffixes that are on the list you specify; rules whose suffixes fail to be on the list are disabled. See section Old-Fashioned Suffix Rules, for full details on suffix rules.

Compiling C programs

n.o is made automatically from n.c with a command of the form $(CC) -c $(CPPFLAGS) $(CFLAGS).

Compiling C++ programs

n.o is made automatically from n.cc or n.C with a command of the form $(CXX) -c $(CPPFLAGS) $(CXXFLAGS). We encourage you to use the suffix .cc for C++ source files instead of .C.

Compiling Pascal programs

n.o is made automatically from n.p with the command $(PC) -c $(PFLAGS).

Compiling Fortran and Ratfor programs

n.o is made automatically from n.r, n.F or n.f by running the Fortran compiler. The precise command used is as follows:

.f

$(FC) -c $(FFLAGS).

.F

$(FC) -c $(FFLAGS) $(CPPFLAGS).

.r

$(FC) -c $(FFLAGS) $(RFLAGS).

Preprocessing Fortran and Ratfor programs

n.f is made automatically from n.r or n.F. This rule runs just the preprocessor to convert a Ratfor or preprocessable Fortran program into a strict Fortran program. The precise command used is as follows:

.F

$(FC) -F $(CPPFLAGS) $(FFLAGS).

.r

$(FC) -F $(FFLAGS) $(RFLAGS).

Compiling Modula-2 programs

n.sym is made from n.def with a command of the form $(M2C) $(M2FLAGS) $(DEFFLAGS). n.o is made from n.mod; the form is: $(M2C) $(M2FLAGS) $(MODFLAGS).

Assembling and preprocessing assembler programs

n.o is made automatically from n.s by running the assembler, as. The precise command is $(AS) $(ASFLAGS). n.s is made automatically from n.S by running the C preprocessor, cpp. The precise command is $(CPP) $(CPPFLAGS).

Linking a single object file

n is made automatically from n.o by running the linker (usually called ld) via the C compiler. The precise command used is $(CC) $(LDFLAGS) n.o $(LOADLIBES) $(LDLIBS). This rule does the right thing for a simple program with only one source file. It will also do the right thing if there are multiple object files (presumably coming from various other source files), one of which has a name matching that of the executable file. Thus,

x: y.o z.o

when x.c, y.c and z.c all exist will execute:

cc -c x.c -o x.o
cc -c y.c -o y.o
cc -c z.c -o z.o
cc x.o y.o z.o -o x
rm -f x.o
rm -f y.o
rm -f z.o

In more complicated cases, such as when there is no object file whose name derives from the executable file name, you must write an explicit command for linking. Each kind of file automatically made into .o object files will be automatically linked by using the compiler ($(CC), $(FC) or $(PC); the C compiler $(CC) is used to assemble .s files) without the -c option. This could be done by using the .o object files as intermediates, but it is faster to do the compiling and linking in one step, so that’s how it’s done.

Yacc for C programs

n.c is made automatically from n.y by running Yacc with the command $(YACC) $(YFLAGS).

Lex for C programs

n.c is made automatically from n.l by running Lex. The actual command is $(LEX) $(LFLAGS).

Lex for Ratfor programs

n.r is made automatically from n.l by running Lex. The actual command is $(LEX) $(LFLAGS). The convention of using the same suffix .l for all Lex files regardless of whether they produce C code or Ratfor code makes it impossible for make to determine automatically which of the two languages you are using in any particular case. If make is called upon to remake an object file from a .l file, it must guess which compiler to use. It will guess the C compiler, because that is more common. If you are using Ratfor, make sure make knows this by mentioning n.r in the makefile. Or, if you are using Ratfor exclusively, with no C files, remove .c from the list of implicit rule suffixes with:

.SUFFIXES:
.SUFFIXES: .o .r .f .l …

Making Lint Libraries from C, Yacc, or Lex programs

n.ln is made from n.c by running lint. The precise command is $(LINT) $(LINTFLAGS) $(CPPFLAGS) -i. The same command is used on the C code produced from n.y or n.l.

TeX and Web

n.dvi is made from n.tex with the command $(TEX). n.tex is made from n.web with $(WEAVE), or from n.w (and from n.ch if it exists or can be made) with $(CWEAVE). n.p is made from n.web with $(TANGLE) and n.c is made from n.w (and from n.ch if it exists or can be made) with $(CTANGLE).

Texinfo and Info

n.dvi is made from n.texinfo, n.texi, or n.txinfo, with the command $(TEXI2DVI) $(TEXI2DVI_FLAGS). n.info is made from n.texinfo, n.texi, or n.txinfo, with the command $(MAKEINFO) $(MAKEINFO_FLAGS).

RCS

Any file n is extracted if necessary from an RCS file named either n,v or RCS/n,v. The precise command used is $(CO) $(COFLAGS). n will not be extracted from RCS if it already exists, even if the RCS file is newer. The rules for RCS are terminal (see section Match-Anything Pattern Rules), so RCS files cannot be generated from another source; they must actually exist.

SCCS

Any file n is extracted if necessary from an SCCS file named either s.n or SCCS/s.n. The precise command used is $(GET) $(GFLAGS). The rules for SCCS are terminal (see section Match-Anything Pattern Rules), so SCCS files cannot be generated from another source; they must actually exist. For the benefit of SCCS, a file n is copied from n.sh and made executable (by everyone). This is for shell scripts that are checked into SCCS. Since RCS preserves the execution permission of a file, you do not need to use this feature with RCS. We recommend that you avoid using SCCS. RCS is widely held to be superior, and is also free. By choosing free software in place of comparable (or inferior) proprietary software, you support the free software movement.

Usually, you want to change only the variables listed in the tables below, which are documented in the following section.

However, the commands in built-in implicit rules actually use variables such as COMPILE.c, LINK.p, and PREPROCESS.S, whose values contain the commands listed above.

make follows the convention that the rule to compile a .x source file uses the variable COMPILE.x. Similarly, the rule to produce an executable from a .x file uses LINK.x; and the rule to preprocess a .x file uses PREPROCESS.x.

Every rule that produces an object file uses the variable OUTPUT_OPTION. make defines this variable either to contain -o $@, or to be empty, depending on a compile-time option. You need the -o option to ensure that the output goes into the right file when the source file is in a different directory, as when using VPATH (see section Searching Directories for Prerequisites). However, compilers on some systems do not accept a -o switch for object files. If you use such a system, and use VPATH, some compilations will put their output in the wrong place. A possible workaround for this problem is to give OUTPUT_OPTION the value ; mv $*.o $@.
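A hedged sketch of that workaround, assuming a hypothetical src directory searched via VPATH and a compiler that rejects -o together with -c:

```makefile
# The compiler writes the object file into the current directory; the
# appended mv then moves it to the name make expects ($@), using the
# stem ($*) to find the file the compiler actually produced.
OUTPUT_OPTION = ; mv $*.o $@

# Hypothetical directory holding the .c sources.
VPATH = src
```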

Variables Used by Implicit Rules

 

The commands in built-in implicit rules make liberal use of certain predefined variables. You can alter these variables in the makefile, with arguments to make, or in the environment to alter how the implicit rules work without redefining the rules themselves. You can cancel all variables used by implicit rules with the -R or --no-builtin-variables option.

For example, the command used to compile a C source file actually says $(CC) -c $(CFLAGS) $(CPPFLAGS). The default values of the variables used are cc and nothing, resulting in the command cc -c. By redefining CC to ncc, you could cause ncc to be used for all C compilations performed by the implicit rule. By redefining CFLAGS to be -g, you could pass the -g option to each compilation. All implicit rules that do C compilation use $(CC) to get the program name for the compiler and all include $(CFLAGS) among the arguments given to the compiler.
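The same point as a short makefile fragment (the compiler choice and flags are illustrative, not defaults):

```makefile
# Override the variables instead of rewriting the built-in rules.
# Every implicit C compilation now runs: gcc -c -g -O2 ...
CC = gcc
CFLAGS = -g -O2
```

The identical overrides can also be given on the command line, e.g. make CC=gcc CFLAGS='-g -O2', without editing the makefile at all.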

The variables used in implicit rules fall into two classes: those that are names of programs (like CC) and those that contain arguments for the programs (like CFLAGS). (The “name of a program” may also contain some command arguments, but it must start with an actual executable program name.) If a variable value contains more than one argument, separate them with spaces.

Here is a table of variables used as names of programs in built-in rules:

AR

Archive-maintaining program; default ar.

AS

Program for doing assembly; default as.

CC

Program for compiling C programs; default cc.

CXX

Program for compiling C++ programs; default g++.

CO

Program for extracting a file from RCS; default co.

CPP

Program for running the C preprocessor, with results to standard output; default $(CC) -E.

FC

Program for compiling or preprocessing Fortran and Ratfor programs; default f77.

GET

Program for extracting a file from SCCS; default get.

LEX

Program to use to turn Lex grammars into C programs or Ratfor programs; default lex.

PC

Program for compiling Pascal programs; default pc.

YACC

Program to use to turn Yacc grammars into C programs; default yacc.

YACCR

Program to use to turn Yacc grammars into Ratfor programs; default yacc -r.

MAKEINFO

Program to convert a Texinfo source file into an Info file; default makeinfo.

TEX

Program to make TeX DVI files from TeX source; default tex.

TEXI2DVI

Program to make TeX DVI files from Texinfo source; default texi2dvi.

WEAVE

Program to translate Web into TeX; default weave.

CWEAVE

Program to translate C Web into TeX; default cweave.

TANGLE

Program to translate Web into Pascal; default tangle.

CTANGLE

Program to translate C Web into C; default ctangle.

RM

Command to remove a file; default rm -f.

Here is a table of variables whose values are additional arguments for the programs above. The default value for all of these is the empty string, unless otherwise noted.

ARFLAGS

Flags to give the archive-maintaining program; default rv.

ASFLAGS

Extra flags to give to the assembler (when explicitly invoked on a .s or .S file).

CFLAGS

Extra flags to give to the C compiler.

CXXFLAGS

Extra flags to give to the C++ compiler.

COFLAGS

Extra flags to give to the RCS co program.

CPPFLAGS

Extra flags to give to the C preprocessor and programs that use it (the C and Fortran compilers).

FFLAGS

Extra flags to give to the Fortran compiler.

GFLAGS

Extra flags to give to the SCCS get program.

LDFLAGS

Extra flags to give to compilers when they are supposed to invoke the linker, ld.

LFLAGS

Extra flags to give to Lex.

PFLAGS

Extra flags to give to the Pascal compiler.

RFLAGS

Extra flags to give to the Fortran compiler for Ratfor programs.

YFLAGS

Extra flags to give to Yacc.

Chains of Implicit Rules

Sometimes a file can be made by a sequence of implicit rules. For example, a file n.o could be made from n.y by running first Yacc and then cc. Such a sequence is called a chain.

If the file n.c exists, or is mentioned in the makefile, no special searching is required: make finds that the object file can be made by C compilation from n.c; later on, when considering how to make n.c, the rule for running Yacc is used. Ultimately both n.c and n.o are updated.

However, even if n.c does not exist and is not mentioned, make knows how to envision it as the missing link between n.o and n.y! In this case, n.c is called an intermediate file. Once make has decided to use the intermediate file, it is entered in the data base as if it had been mentioned in the makefile, along with the implicit rule that says how to create it.

Intermediate files are remade using their rules just like all other files. But intermediate files are treated differently in two ways.

The first difference is what happens if the intermediate file does not exist. If an ordinary file b does not exist, and make considers a target that depends on b, it invariably creates b and then updates the target from b. But if b is an intermediate file, then make can leave well enough alone. It won’t bother updating b, or the ultimate target, unless some prerequisite of b is newer than that target or there is some other reason to update that target.

The second difference is that if make does create b in order to update something else, it deletes b later on after it is no longer needed. Therefore, an intermediate file which did not exist before make also does not exist after make. make reports the deletion to you by printing a rm -f command showing which file it is deleting.

Ordinarily, a file cannot be intermediate if it is mentioned in the makefile as a target or prerequisite. However, you can explicitly mark a file as intermediate by listing it as a prerequisite of the special target .INTERMEDIATE. This takes effect even if the file is mentioned explicitly in some other way.

You can prevent automatic deletion of an intermediate file by marking it as a secondary file. To do this, list it as a prerequisite of the special target .SECONDARY. When a file is secondary, make will not create the file merely because it does not already exist, but make does not automatically delete the file. Marking a file as secondary also marks it as intermediate.
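Both special targets can be sketched in a few lines (the file names are hypothetical):

```makefile
# Treat parse.c as intermediate even though it is mentioned explicitly:
# make may skip it and will delete it after use.
.INTERMEDIATE: parse.c

# gram.c is intermediate for search purposes, but make will not delete it
# after the build finishes.
.SECONDARY: gram.c
```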

You can list the target pattern of an implicit rule (such as %.o) as a prerequisite of the special target .PRECIOUS to preserve intermediate files made by implicit rules whose target patterns match that file’s name; see section Interrupting or Killing make.

A chain can involve more than two implicit rules. For example, it is possible to make a file foo from RCS/foo.y,v by running RCS, Yacc and cc. Then both foo.y and foo.c are intermediate files that are deleted at the end.

No single implicit rule can appear more than once in a chain. This means that make will not even consider such a ridiculous thing as making foo from foo.o.o by running the linker twice. This constraint has the added benefit of preventing any infinite loop in the search for an implicit rule chain.

There are some special implicit rules to optimize certain cases that would otherwise be handled by rule chains. For example, making foo from foo.c could be handled by compiling and linking with separate chained rules, using foo.o as an intermediate file. But what actually happens is that a special rule for this case does the compilation and linking with a single cc command. The optimized rule is used in preference to the step-by-step chain because it comes earlier in the ordering of rules.

Defining and Redefining Pattern Rules

You define an implicit rule by writing a pattern rule. A pattern rule looks like an ordinary rule, except that its target contains the character % (exactly one of them). The target is considered a pattern for matching file names; the % can match any nonempty substring, while other characters match only themselves. The prerequisites likewise use % to show how their names relate to the target name.

Thus, a pattern rule %.o : %.c says how to make any file stem.o from another file stem.c.

Note that expansion using % in pattern rules occurs after any variable or function expansions, which take place when the makefile is read. See section How to Use Variables, and section Functions for Transforming Text.

Introduction to Pattern Rules

 

A pattern rule contains the character % (exactly one of them) in the target; otherwise, it looks exactly like an ordinary rule. The target is a pattern for matching file names; the % matches any nonempty substring, while other characters match only themselves.

For example, %.c as a pattern matches any file name that ends in .c. s.%.c as a pattern matches any file name that starts with s., ends in .c and is at least five characters long. (There must be at least one character to match the %.) The substring that the % matches is called the stem.

% in a prerequisite of a pattern rule stands for the same stem that was matched by the % in the target. In order for the pattern rule to apply, its target pattern must match the file name under consideration, and its prerequisite patterns must name files that exist or can be made. These files become prerequisites of the target.

Thus, a rule of the form

%.o : %.c ; command…

specifies how to make a file n.o, with another file n.c as its prerequisite, provided that n.c exists or can be made.

There may also be prerequisites that do not use %; such a prerequisite attaches to every file made by this pattern rule. These unvarying prerequisites are useful occasionally.
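For instance, a pattern rule can make every object file depend on a shared header (config.h is a hypothetical name); only the %-prerequisite varies with the target:

```makefile
# Every x.o depends on both x.c and config.h; a change to config.h
# triggers recompilation of all objects built by this rule.
%.o : %.c config.h
	$(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@
```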

A pattern rule need not have any prerequisites that contain %, or in fact any prerequisites at all. Such a rule is effectively a general wildcard. It provides a way to make any file that matches the target pattern. See section Defining Last-Resort Default Rules.

Pattern rules may have more than one target. Unlike normal rules, this does not act as many different rules with the same prerequisites and commands. If a pattern rule has multiple targets, make knows that the rule’s commands are responsible for making all of the targets. The commands are executed only once to make all the targets. When searching for a pattern rule to match a target, the target patterns of a rule other than the one that matches the target in need of a rule are incidental: make worries only about giving commands and prerequisites to the file presently in question. However, when this file’s commands are run, the other targets are marked as having been updated themselves.

The order in which pattern rules appear in the makefile is important since this is the order in which they are considered. Of equally applicable rules, only the first one found is used. The rules you write take precedence over those that are built in. Note however, that a rule whose prerequisites actually exist or are mentioned always takes priority over a rule with prerequisites that must be made by chaining other implicit rules.

Pattern Rule Examples

Here are some examples of pattern rules actually predefined in make. First, the rule that compiles .c files into .o files:

%.o : %.c
        $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@

defines a rule that can make any file x.o from x.c. The command uses the automatic variables $@ and $< to substitute the names of the target file and the source file in each case where the rule applies (see section Automatic Variables).

Here is a second built-in rule:

% :: RCS/%,v
        $(CO) $(COFLAGS) $<

defines a rule that can make any file x whatsoever from a corresponding file x,v in the subdirectory RCS. Since the target is %, this rule will apply to any file whatever, provided the appropriate prerequisite file exists. The double colon makes the rule terminal, which means that its prerequisite may not be an intermediate file (see section Match-Anything Pattern Rules).

This pattern rule has two targets:

%.tab.c %.tab.h: %.y
        bison -d $<

This tells make that the command bison -d x.y will make both x.tab.c and x.tab.h. If the file foo depends on the files parse.tab.o and scan.o and the file scan.o depends on the file parse.tab.h, when parse.y is changed, the command bison -d parse.y will be executed only once, and the prerequisites of both parse.tab.o and scan.o will be satisfied. (Presumably the file parse.tab.o will be recompiled from parse.tab.c and the file scan.o from scan.c, while foo is linked from parse.tab.o, scan.o, and its other prerequisites, and it will execute happily ever after.)

Automatic Variables

 

Suppose you are writing a pattern rule to compile a .c file into a .o file: how do you write the cc command so that it operates on the right source file name? You cannot write the name in the command, because the name is different each time the implicit rule is applied.

What you do is use a special feature of make, the automatic variables. These variables have values computed afresh for each rule that is executed, based on the target and prerequisites of the rule. In this example, you would use $@ for the object file name and $< for the source file name.

Here is a table of automatic variables:

$@

The file name of the target of the rule. If the target is an archive member, then $@ is the name of the archive file. In a pattern rule that has multiple targets (see section Introduction to Pattern Rules), $@ is the name of whichever target caused the rule’s commands to be run.

$%

The target member name, when the target is an archive member. See section Using make to Update Archive Files. For example, if the target is foo.a(bar.o) then $% is bar.o and $@ is foo.a. $% is empty when the target is not an archive member.

$<

The name of the first prerequisite. If the target got its commands from an implicit rule, this will be the first prerequisite added by the implicit rule (see section Using Implicit Rules).

$?

The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (see section Using make to Update Archive Files).

$^

The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (see section Using make to Update Archive Files). A target has only one prerequisite on each other file it depends on, no matter how many times each file is listed as a prerequisite. So if you list a prerequisite more than once for a target, the value of $^ contains just one copy of the name.

$+

This is like $^, but prerequisites listed more than once are duplicated in the order they were listed in the makefile. This is primarily useful for use in linking commands where it is meaningful to repeat library file names in a particular order.

$*

The stem with which an implicit rule matches (see section How Patterns Match). If the target is dir/a.foo.b and the target pattern is a.%.b then the stem is dir/foo. The stem is useful for constructing names of related files. In a static pattern rule, the stem is part of the file name that matched the % in the target pattern. In an explicit rule, there is no stem; so $* cannot be determined in that way. Instead, if the target name ends with a recognized suffix (see section Old-Fashioned Suffix Rules), $* is set to the target name minus the suffix. For example, if the target name is foo.c, then $* is set to foo, since .c is a suffix. GNU make does this bizarre thing only for compatibility with other implementations of make. You should generally avoid using $* except in implicit rules or static pattern rules. If the target name in an explicit rule does not end with a recognized suffix, $* is set to the empty string for that rule.

$? is useful even in explicit rules when you wish to operate on only the prerequisites that have changed. For example, suppose that an archive named lib is supposed to contain copies of several object files. This rule copies just the changed object files into the archive:

lib: foo.o bar.o lose.o win.o
        ar r lib $?

Of the variables listed above, four have values that are single file names, and three have values that are lists of file names. These seven have variants that get just the file’s directory name or just the file name within the directory. The variant variables’ names are formed by appending D or F, respectively. These variants are semi-obsolete in GNU make since the functions dir and notdir can be used to get a similar effect (see section Functions for File Names). Note, however, that the F variants all omit the trailing slash which always appears in the output of the dir function. Here is a table of the variants:

$(@D)

The directory part of the file name of the target, with the trailing slash removed. If the value of $@ is dir/foo.o then $(@D) is dir. This value is . if $@ does not contain a slash.

$(@F)

The file-within-directory part of the file name of the target. If the value of $@ is dir/foo.o then $(@F) is foo.o. $(@F) is equivalent to $(notdir $@).

$(*D)

$(*F)

The directory part and the file-within-directory part of the stem; dir and foo in this example.

$(%D)

$(%F)

The directory part and the file-within-directory part of the target archive member name. This makes sense only for archive member targets of the form archive(member) and is useful only when member may contain a directory name. (See section Archive Members as Targets.)

$(<D)

$(<F)

The directory part and the file-within-directory part of the first prerequisite.

$(^D)

$(^F)

Lists of the directory parts and the file-within-directory parts of all prerequisites.

$(?D)

$(?F)

Lists of the directory parts and the file-within-directory parts of all prerequisites that are newer than the target.
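A common use of the D variant, sketched here, is creating the target’s directory before compiling into it (the obj/ layout is an assumption made for illustration):

```makefile
# For the target obj/foo.o, $(@D) is obj and $(@F) is foo.o.
obj/%.o: %.c
	mkdir -p $(@D)
	$(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@
```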

Note that we use a special stylistic convention when we talk about these automatic variables; we write “the value of $<”, rather than “the variable <” as we would write for ordinary variables such as objects and CFLAGS. We think this convention looks more natural in this special case. Please do not assume it has a deep significance; $< refers to the variable named < just as $(CFLAGS) refers to the variable named CFLAGS. You could just as well use $(<) in place of $<.

How Patterns Match

A target pattern is composed of a % between a prefix and a suffix, either or both of which may be empty. The pattern matches a file name only if the file name starts with the prefix and ends with the suffix, without overlap. The text between the prefix and the suffix is called the stem. Thus, when the pattern %.o matches the file name test.o, the stem is test. The pattern rule prerequisites are turned into actual file names by substituting the stem for the character %. Thus, if in the same example one of the prerequisites is written as %.c, it expands to test.c.

When the target pattern does not contain a slash (and it usually does not), directory names in the file names are removed from the file name before it is compared with the target prefix and suffix. After the comparison of the file name to the target pattern, the directory names, along with the slash that ends them, are added on to the prerequisite file names generated from the pattern rule’s prerequisite patterns and the file name. The directories are ignored only for the purpose of finding an implicit rule to use, not in the application of that rule. Thus, e%t matches the file name src/eat, with src/a as the stem. When prerequisites are turned into file names, the directories from the stem are added at the front, while the rest of the stem is substituted for the %. The stem src/a with a prerequisite pattern c%r gives the file name src/car.

Match-Anything Pattern Rules

When a pattern rule’s target is just %, it matches any file name whatever. We call these rules match-anything rules. They are very useful, but it can take a lot of time for make to think about them, because it must consider every such rule for each file name listed either as a target or as a prerequisite.

Suppose the makefile mentions foo.c. For this target, make would have to consider making it by linking an object file foo.c.o, or by C compilation-and-linking in one step from foo.c.c, or by Pascal compilation-and-linking from foo.c.p, and many other possibilities.

We know these possibilities are ridiculous since foo.c is a C source file, not an executable. If make did consider these possibilities, it would ultimately reject them, because files such as foo.c.o and foo.c.p would not exist. But these possibilities are so numerous that make would run very slowly if it had to consider them.

To gain speed, we have put various constraints on the way make considers match-anything rules. There are two different constraints that can be applied, and each time you define a match-anything rule you must choose one or the other for that rule.

One choice is to mark the match-anything rule as terminal by defining it with a double colon. When a rule is terminal, it does not apply unless its prerequisites actually exist. Prerequisites that could be made with other implicit rules are not good enough. In other words, no further chaining is allowed beyond a terminal rule.
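For illustration, a terminal match-anything rule is written with a double colon; this sketch mirrors the built-in rule for checking sources out of RCS:

```makefile
# Terminal (double-colon) match-anything rule: it applies only when the
# prerequisite RCS/%,v actually exists; make will not chain further
# implicit rules to create that prerequisite.
%:: RCS/%,v
	$(CO) $(COFLAGS) $<
```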

For example, the built-in implicit rules for extracting sources from RCS and SCCS files are terminal; as a result, if the file foo.c,v does not exist, make will not even consider trying to make it as an intermediate file from foo.c,v.o or from RCS/SCCS/s.foo.c,v. RCS and SCCS files are generally ultimate source files, which should not be remade from any other files; therefore, make can save time by not looking for ways to remake them.

If you do not mark the match-anything rule as terminal, then it is nonterminal. A nonterminal match-anything rule cannot apply to a file name that indicates a specific type of data. A file name indicates a specific type of data if some non-match-anything implicit rule target matches it.

For example, the file name foo.c matches the target for the pattern rule %.c : %.y (the rule to run Yacc). Regardless of whether this rule is actually applicable (which happens only if there is a file foo.y), the fact that its target matches is enough to prevent consideration of any nonterminal match-anything rules for the file foo.c. Thus, make will not even consider trying to make foo.c as an executable file from foo.c.o, foo.c.c, foo.c.p, etc.

The motivation for this constraint is that nonterminal match-anything rules are used for making files containing specific types of data (such as executable files) and a file name with a recognized suffix indicates some other specific type of data (such as a C source file).

Special built-in dummy pattern rules are provided solely to recognize certain file names so that nonterminal match-anything rules will not be considered. These dummy rules have no prerequisites and no commands, and they are ignored for all other purposes. For example, the built-in implicit rule

%.p :

exists to make sure that Pascal source files such as foo.p match a specific target pattern and thereby prevent time from being wasted looking for foo.p.o or foo.p.c.

Dummy pattern rules such as the one for %.p are made for every suffix listed as valid for use in suffix rules (see section Old-Fashioned Suffix Rules).

Canceling Implicit Rules

You can override a built-in implicit rule (or one you have defined yourself) by defining a new pattern rule with the same target and prerequisites, but different commands. When the new rule is defined, the built-in one is replaced. The new rule’s position in the sequence of implicit rules is determined by where you write the new rule.

You can cancel a built-in implicit rule by defining a pattern rule with the same target and prerequisites, but no commands. For example, the following would cancel the rule that runs the assembler:

%.o : %.s

Defining Last-Resort Default Rules

You can define a last-resort implicit rule by writing a terminal match-anything pattern rule with no prerequisites (see section Match-Anything Pattern Rules). This is just like any other pattern rule; the only thing special about it is that it will match any target. So such a rule’s commands are used for all targets and prerequisites that have no commands of their own and for which no other implicit rule applies.

For example, when testing a makefile, you might not care if the source files contain real data, only that they exist. Then you might do this:

%::
        touch $@

to cause all the source files needed (as prerequisites) to be created automatically.

You can instead define commands to be used for targets for which there are no rules at all, even ones which don’t specify commands. You do this by writing a rule for the target .DEFAULT. Such a rule’s commands are used for all prerequisites which do not appear as targets in any explicit rule, and for which no implicit rule applies. Naturally, there is no .DEFAULT rule unless you write one.
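As a sketch, a .DEFAULT rule might simply report any otherwise-unknown file (the echo command here is purely illustrative):

```makefile
# These commands run for every prerequisite that is not the target of
# any explicit rule and that no implicit rule knows how to make.
.DEFAULT:
	@echo "no rule for $@"
```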

If you use .DEFAULT with no commands or prerequisites:

.DEFAULT:

the commands previously stored for .DEFAULT are cleared. Then make acts as if you had never defined .DEFAULT at all.

If you do not want a target to get the commands from a match-anything pattern rule or .DEFAULT, but you also do not want any commands to be run for the target, you can give it empty commands (see section Using Empty Commands).
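An empty command is written as a lone semicolon after the target, for example:

```makefile
# foo.h gets an (empty) command of its own, so neither a match-anything
# pattern rule nor .DEFAULT will supply commands for it.
foo.h: ;
```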

You can use a last-resort rule to override part of another makefile. See section Overriding Part of Another Makefile.

Old-Fashioned Suffix Rules

Suffix rules are the old-fashioned way of defining implicit rules for make. Suffix rules are obsolete because pattern rules are more general and clearer. They are supported in GNU make for compatibility with old makefiles. They come in two kinds: double-suffix and single-suffix.

A double-suffix rule is defined by a pair of suffixes: the target suffix and the source suffix. It matches any file whose name ends with the target suffix. The corresponding implicit prerequisite is made by replacing the target suffix with the source suffix in the file name. A two-suffix rule whose target and source suffixes are .o and .c is equivalent to the pattern rule %.o : %.c.

A single-suffix rule is defined by a single suffix, which is the source suffix. It matches any file name, and the corresponding implicit prerequisite name is made by appending the source suffix. A single-suffix rule whose source suffix is .c is equivalent to the pattern rule % : %.c.
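For example, the built-in single-suffix rule for C has roughly this form (shown as a sketch; the exact built-in commands are expressed through standard variables like these):

```makefile
# Single-suffix rule: matches any file name (e.g. foo), with the implicit
# prerequisite formed by appending .c (foo.c). Equivalent to % : %.c.
.c:
	$(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $< -o $@
```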

Suffix rule definitions are recognized by comparing each rule’s target against a defined list of known suffixes. When make sees a rule whose target is a known suffix, this rule is considered a single-suffix rule. When make sees a rule whose target is two known suffixes concatenated, this rule is taken as a double-suffix rule.

For example, .c and .o are both on the default list of known suffixes. Therefore, if you define a rule whose target is .c.o, make takes it to be a double-suffix rule with source suffix .c and target suffix .o. Here is the old-fashioned way to define the rule for compiling a C source file:

.c.o:
        $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@ $<

Suffix rules cannot have any prerequisites of their own. If they have any, they are treated as normal files with funny names, not as suffix rules. Thus, the rule:

.c.o: foo.h
        $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@ $<

tells how to make the file .c.o from the prerequisite file foo.h, and is not at all like the pattern rule:

%.o: %.c foo.h
        $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@ $<

which tells how to make .o files from .c files, and makes all .o files using this pattern rule also depend on foo.h.

Suffix rules with no commands are also meaningless. They do not remove previous rules as do pattern rules with no commands (see section Canceling Implicit Rules). They simply enter the suffix or pair of suffixes concatenated as a target in the data base.

The known suffixes are simply the names of the prerequisites of the special target .SUFFIXES. You can add your own suffixes by writing a rule for .SUFFIXES that adds more prerequisites, as in:

.SUFFIXES: .hack .win

which adds .hack and .win to the end of the list of suffixes.

If you wish to eliminate the default known suffixes instead of just adding to them, write a rule for .SUFFIXES with no prerequisites. By special dispensation, this eliminates all existing prerequisites of .SUFFIXES. You can then write another rule to add the suffixes you want. For example,

.SUFFIXES:            # Delete the default suffixes
.SUFFIXES: .c .o .h   # Define our suffix list

The -r or --no-builtin-rules flag causes the default list of suffixes to be empty.

The variable SUFFIXES is defined to the default list of suffixes before make reads any makefiles. You can change the list of suffixes with a rule for the special target .SUFFIXES, but that does not alter this variable.
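A quick way to see this, sketched here, is a target that prints the variable (the target name show-suffixes is arbitrary):

```makefile
# SUFFIXES still holds the default list even though the rule below
# has added .hack to the known suffixes via .SUFFIXES.
.SUFFIXES: .hack
show-suffixes:
	@echo $(SUFFIXES)
```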

Implicit Rule Search Algorithm

Here is the procedure make uses for searching for an implicit rule for a target t. This procedure is followed for each double-colon rule with no commands, for each target of ordinary rules none of which have commands, and for each prerequisite that is not the target of any rule. It is also followed recursively for prerequisites that come from implicit rules, in the search for a chain of rules.

Suffix rules are not mentioned in this algorithm because suffix rules are converted to equivalent pattern rules once the makefiles have been read in.

For an archive member target of the form archive(member), the following algorithm is run twice, first using the entire target name t, and second using (member) as the target t if the first run found no rule.

  1. Split t into a directory part, called d, and the rest, called n. For example, if t is src/foo.o, then d is src/ and n is foo.o.
  2. Make a list of all the pattern rules one of whose targets matches t or n. If the target pattern contains a slash, it is matched against t; otherwise, against n.
  3. If any rule in that list is not a match-anything rule, then remove all nonterminal match-anything rules from the list.
  4. Remove from the list all rules with no commands.
  5. For each pattern rule in the list:
    1. Find the stem s, which is the nonempty part of t or n matched by the % in the target pattern.
    2. Compute the prerequisite names by substituting s for %; if the target pattern does not contain a slash, append d to the front of each prerequisite name.
    3. Test whether all the prerequisites exist or ought to exist. (If a file name is mentioned in the makefile as a target or as an explicit prerequisite, then we say it ought to exist.) If all prerequisites exist or ought to exist, or there are no prerequisites, then this rule applies.
  6. If no pattern rule has been found so far, try harder. For each pattern rule in the list:
    1. If the rule is terminal, ignore it and go on to the next rule.
    2. Compute the prerequisite names as before.
    3. Test whether all the prerequisites exist or ought to exist.
    4. For each prerequisite that does not exist, follow this algorithm recursively to see if the prerequisite can be made by an implicit rule.
    5. If all prerequisites exist, ought to exist, or can be made by implicit rules, then this rule applies.
  7. If no implicit rule applies, the rule for .DEFAULT, if any, applies. In that case, give t the same commands that .DEFAULT has. Otherwise, there are no commands for t.

Once a rule that applies has been found, for each target pattern of the rule other than the one that matched t or n, the % in the pattern is replaced with s and the resultant file name is stored until the commands to remake the target file t are executed. After these commands are executed, each of these stored file names is entered into the data base and marked as having been updated and as having the same update status as the file t.

When the commands of a pattern rule are executed for t, the automatic variables are set corresponding to the target and prerequisites. See section Automatic Variables.

Using make to Update Archive Files

Archive files are files containing named subfiles called members; they are maintained with the program ar and their main use is as subroutine libraries for linking.

Archive Members as Targets

An individual member of an archive file can be used as a target or prerequisite in make. You specify the member named member in archive file archive as follows:

archive(member)

This construct is available only in targets and prerequisites, not in commands! Most programs that you might use in commands do not support this syntax and cannot act directly on archive members. Only ar and other programs specifically designed to operate on archives can do so. Therefore, valid commands to update an archive member target probably must use ar. For example, this rule says to create a member hack.o in archive foolib by copying the file hack.o:

foolib(hack.o) : hack.o
        ar cr foolib hack.o

In fact, nearly all archive member targets are updated in just this way and there is an implicit rule to do it for you. Note: The c flag to ar is required if the archive file does not already exist.

To specify several members in the same archive, you can write all the member names together between the parentheses. For example:

foolib(hack.o kludge.o)

is equivalent to:

foolib(hack.o) foolib(kludge.o)

You can also use shell-style wildcards in an archive member reference. See section Using Wildcard Characters in File Names. For example, foolib(*.o) expands to all existing members of the foolib archive whose names end in .o; perhaps foolib(hack.o) foolib(kludge.o).

Implicit Rule for Archive Member Targets

Recall that a target that looks like a(m) stands for the member named m in the archive file a.

When make looks for an implicit rule for such a target, as a special feature it considers implicit rules that match (m), as well as those that match the actual target a(m).

This causes one special rule whose target is (%) to match. This rule updates the target a(m) by copying the file m into the archive. For example, it will update the archive member target foo.a(bar.o) by copying the file bar.o into the archive foo.a as a member named bar.o.

When this rule is chained with others, the result is very powerful. Thus, make "foo.a(bar.o)" (the quotes are needed to protect the ( and ) from being interpreted specially by the shell) in the presence of a file bar.c is enough to cause the following commands to be run, even without a makefile:

cc -c bar.c -o bar.o
ar r foo.a bar.o
rm -f bar.o

Here make has envisioned the file bar.o as an intermediate file. See section Chains of Implicit Rules.

Implicit rules such as this one are written using the automatic variable $%. See section Automatic Variables.

An archive member name in an archive cannot contain a directory name, but it may be useful in a makefile to pretend that it does. If you write an archive member target foo.a(dir/file.o), make will perform automatic updating with this command:

ar r foo.a dir/file.o

which has the effect of copying the file dir/file.o into a member named file.o. In connection with such usage, the automatic variables %D and %F may be useful.

Updating Archive Symbol Directories

An archive file that is used as a library usually contains a special member named __.SYMDEF that contains a directory of the external symbol names defined by all the other members. After you update any other members, you need to update __.SYMDEF so that it will summarize the other members properly. This is done by running the ranlib program:

ranlib archivefile

Normally you would put this command in the rule for the archive file, and make all the members of the archive file prerequisites of that rule. For example,

libfoo.a: libfoo.a(x.o) libfoo.a(y.o) …
        ranlib libfoo.a

The effect of this is to update archive members x.o, y.o, etc., and then update the symbol directory member __.SYMDEF by running ranlib. The rules for updating the members are not shown here; most likely you can omit them and use the implicit rule which copies files into the archive, as described in the preceding section.

This is not necessary when using the GNU ar program, which updates the __.SYMDEF member automatically.

Dangers When Using Archives

It is important to be careful when using parallel execution (the -j switch; see section Parallel Execution) and archives. If multiple ar commands run at the same time on the same archive file, they will not know about each other and can corrupt the file.

Possibly a future version of make will provide a mechanism to circumvent this problem by serializing all commands that operate on the same archive file. But for the time being, you must either write your makefiles to avoid this problem in some other way, or not use -j.

Suffix Rules for Archive Files

You can write a special kind of suffix rule for dealing with archive files. See section Old-Fashioned Suffix Rules, for a full explanation of suffix rules. Archive suffix rules are obsolete in GNU make, because pattern rules for archives are a more general mechanism (see section Implicit Rule for Archive Member Targets). But they are retained for compatibility with other makes.

To write a suffix rule for archives, you simply write a suffix rule using the target suffix .a (the usual suffix for archive files). For example, here is the old-fashioned suffix rule to update a library archive from C source files:

.c.a:
        $(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $*.o
        $(AR) r $@ $*.o
        $(RM) $*.o

This works just as if you had written the pattern rule:

(%.o): %.c
        $(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $*.o
        $(AR) r $@ $*.o
        $(RM) $*.o

In fact, this is just what make does when it sees a suffix rule with .a as the target suffix. Any double-suffix rule .x.a is converted to a pattern rule with the target pattern (%.o) and a prerequisite pattern of %.x.

Since you might want to use .a as the suffix for some other kind of file, make also converts archive suffix rules to pattern rules in the normal way (see section Old-Fashioned Suffix Rules). Thus a double-suffix rule .x.a produces two pattern rules: (%.o): %.x and %.a: %.x.

Features of GNU make

Here is a summary of the features of GNU make, for comparison with and credit to other versions of make. We consider the features of make in 4.2 BSD systems as a baseline. If you are concerned with writing portable makefiles, you should not use the features of make listed here, nor the ones in section Incompatibilities and Missing Features.

Many features come from the version of make in System V.

  • The VPATH variable and its special meaning. See section Searching Directories for Prerequisites. This feature exists in System V make, but is undocumented. It is documented in 4.3 BSD make (which says it mimics System V’s VPATH feature).
  • Included makefiles. See section Including Other Makefiles. Allowing multiple files to be included with a single directive is a GNU extension.
  • Variables are read from and communicated via the environment. See section Variables from the Environment.
  • Options passed through the variable MAKEFLAGS to recursive invocations of make. See section Communicating Options to a Sub-make.
  • The automatic variable $% is set to the member name in an archive reference. See section Automatic Variables.
  • The automatic variables $@, $*, $<, $%, and $? have corresponding forms like $(@F) and $(@D). We have generalized this to $^ as an obvious extension. See section Automatic Variables.
  • Substitution variable references. See section Basics of Variable References.
  • The command-line options -b and -m, accepted and ignored. In System V make, these options actually do something.
  • Execution of recursive commands to run make via the variable MAKE even if -n, -q or -t is specified. See section Recursive Use of make.
  • Support for suffix .a in suffix rules. See section Suffix Rules for Archive Files. This feature is obsolete in GNU make, because the general feature of rule chaining (see section Chains of Implicit Rules) allows one pattern rule for installing members in an archive (see section Implicit Rule for Archive Member Targets) to be sufficient.
  • The arrangement of lines and backslash-newline combinations in commands is retained when the commands are printed, so they appear as they do in the makefile, except for the stripping of initial whitespace.

The following features were inspired by various other versions of make. In some cases it is unclear exactly which versions inspired which others.

  • Pattern rules using %. This has been implemented in several versions of make. We’re not sure who invented it first, but it’s been spread around a bit. See section Defining and Redefining Pattern Rules.
  • Rule chaining and implicit intermediate files. This was implemented by Stu Feldman in his version of make for AT&T Eighth Edition Research Unix, and later by Andrew Hume of AT&T Bell Labs in his mk program (where he terms it “transitive closure”). We do not really know if we got this from either of them or thought it up ourselves at the same time. See section Chains of Implicit Rules.
  • The automatic variable $^ containing a list of all prerequisites of the current target. We did not invent this, but we have no idea who did. See section Automatic Variables. The automatic variable $+ is a simple extension of $^.
  • The “what if” flag (-W in GNU make) was (as far as we know) invented by Andrew Hume in mk. See section Instead of Executing the Commands.
  • The concept of doing several things at once (parallelism) exists in many incarnations of make and similar programs, though not in the System V or BSD implementations. See section Command Execution.
  • Modified variable references using pattern substitution come from SunOS 4. See section Basics of Variable References. This functionality was provided in GNU make by the patsubst function before the alternate syntax was implemented for compatibility with SunOS 4. It is not altogether clear who inspired whom, since GNU make had patsubst before SunOS 4 was released.
  • The special significance of + characters preceding command lines (see section Instead of Executing the Commands) is mandated by IEEE Standard 1003.2-1992 (POSIX.2).
  • The += syntax to append to the value of a variable comes from SunOS 4 make. See section Appending More Text to Variables.
  • The syntax archive(mem1 mem2...) to list multiple members in a single archive file comes from SunOS 4 make. See section Archive Members as Targets.
  • The -include directive to include makefiles with no error for a nonexistent file comes from SunOS 4 make. (But note that SunOS 4 make does not allow multiple makefiles to be specified in one -include directive.) The same feature appears with the name sinclude in SGI make and perhaps others.

The remaining features are inventions new in GNU make:

  • Use the -v or --version option to print version and copyright information.
  • Use the -h or --help option to summarize the options to make.
  • Simply-expanded variables. See section The Two Flavors of Variables.
  • Pass command-line variable assignments automatically through the variable MAKE to recursive make invocations. See section Recursive Use of make.
  • Use the -C or --directory command option to change directory. See section Summary of Options.
  • Make verbatim variable definitions with define. See section Defining Variables Verbatim.
  • Declare phony targets with the special target .PHONY. Andrew Hume of AT&T Bell Labs implemented a similar feature with a different syntax in his mk program. This seems to be a case of parallel discovery. See section Phony Targets.
  • Manipulate text by calling functions. See section Functions for Transforming Text.
  • Use the -o or --old-file option to pretend a file’s modification-time is old. See section Avoiding Recompilation of Some Files.
  • Conditional execution. This feature has been implemented numerous times in various versions of make; it seems a natural extension derived from the features of the C preprocessor and similar macro languages and is not a revolutionary concept. See section Conditional Parts of Makefiles.
  • Specify a search path for included makefiles. See section Including Other Makefiles.
  • Specify extra makefiles to read with an environment variable. See section The Variable MAKEFILES.
  • Strip leading sequences of ./ from file names, so that ./file and file are considered to be the same file.
  • Use a special search method for library prerequisites written in the form -lname. See section Directory Search for Link Libraries.
  • Allow suffixes for suffix rules (see section Old-Fashioned Suffix Rules) to contain any characters. In other versions of make, they must begin with . and not contain any / characters.
  • Keep track of the current level of make recursion using the variable MAKELEVEL. See section Recursive Use of make.
  • Provide any goals given on the command line in the variable MAKECMDGOALS. See section Arguments to Specify the Goals.
  • Specify static pattern rules. See section Static Pattern Rules.
  • Provide selective vpath search. See section Searching Directories for Prerequisites.
  • Provide computed variable references. See section Basics of Variable References.
  • Update makefiles. See section How Makefiles Are Remade. System V make has a very, very limited form of this functionality in that it will check out SCCS files for makefiles.
  • Various new built-in implicit rules. See section Catalogue of Implicit Rules.
  • The built-in variable MAKE_VERSION gives the version number of make.

Incompatibilities and Missing Features

The make programs in various other systems support a few features that are not implemented in GNU make. The POSIX.2 standard (IEEE Standard 1003.2-1992) which specifies make does not require any of these features.

  • A target of the form file((entry)) stands for a member of archive file file. The member is chosen, not by name, but by being an object file which defines the linker symbol entry. This feature was not put into GNU make because of the nonmodularity of putting knowledge into make of the internal format of archive file symbol tables. See section Updating Archive Symbol Directories.

  • Suffixes (used in suffix rules) that end with the character ~ have a special meaning to System V make; they refer to the SCCS file that corresponds to the file one would get without the ~. For example, the suffix rule .c~.o would make the file n.o from the SCCS file s.n.c. For complete coverage, a whole series of such suffix rules is required. See section Old-Fashioned Suffix Rules. In GNU make, this entire series of cases is handled by two pattern rules for extraction from SCCS, in combination with the general feature of rule chaining. See section Chains of Implicit Rules.

  • In System V make, the string $$@ has the strange meaning that, in the prerequisites of a rule with multiple targets, it stands for the particular target that is being processed. This is not defined in GNU make because $$ should always stand for an ordinary $. It is possible to get portions of this functionality through the use of static pattern rules (see section Static Pattern Rules). The System V make rule:

    $(targets): $$@.o lib.a

    can be replaced with the GNU make static pattern rule:

    $(targets): %: %.o lib.a

  • In System V and 4.3 BSD make, files found by VPATH search (see section Searching Directories for Prerequisites) have their names changed inside command strings. We feel it is much cleaner to always use automatic variables and thus make this feature obsolete.

  • In some Unix makes, the automatic variable $* appearing in the prerequisites of a rule has the amazingly strange “feature” of expanding to the full name of the target of that rule. We cannot imagine what went on in the minds of Unix make developers to do this; it is utterly inconsistent with the normal definition of $*.

  • In some Unix makes, implicit rule search (see section Using Implicit Rules) is apparently done for all targets, not just those without commands. This means you can do:

    foo.o:
            cc -c foo.c

    and Unix make will intuit that foo.o depends on foo.c. We feel that such usage is broken. The prerequisite properties of make are well-defined (for GNU make, at least), and doing such a thing simply does not fit the model.

  • GNU make does not include any built-in implicit rules for compiling or preprocessing EFL programs. If we hear of anyone who is using EFL, we will gladly add them.

  • It appears that in SVR4 make, a suffix rule can be specified with no commands, and it is treated as if it had empty commands (see section Using Empty Commands). For example:

    .c.a:

    will override the built-in .c.a suffix rule. We feel that it is cleaner for a rule without commands to always simply add to the prerequisite list for the target. The above example can be easily rewritten to get the desired behavior in GNU make:

    .c.a: ;

  • Some versions of make invoke the shell with the -e flag, except under -k (see section Testing the Compilation of a Program). The -e flag tells the shell to exit as soon as any program it runs returns a nonzero status. We feel it is cleaner to write each shell command line to stand on its own and not require this special treatment.

Makefile Conventions

This chapter describes conventions for writing the Makefiles for GNU programs. Using Automake will help you write a Makefile that follows these conventions.

General Conventions for Makefiles

Every Makefile should contain this line:

SHELL = /bin/sh

to avoid trouble on systems where the SHELL variable might be inherited from the environment. (This is never a problem with GNU make.)

Different make programs have incompatible suffix lists and implicit rules, and this sometimes creates confusion or misbehavior. So it is a good idea to set the suffix list explicitly using only the suffixes you need in the particular Makefile, like this:

.SUFFIXES:
.SUFFIXES: .c .o

The first line clears out the suffix list, the second introduces all suffixes which may be subject to implicit rules in this Makefile.

Don’t assume that . is in the path for command execution. When you need to run programs that are a part of your package during the make, please make sure that it uses ./ if the program is built as part of the make or $(srcdir)/ if the file is an unchanging part of the source code. Without one of these prefixes, the current search path is used.

The distinction between ./ (the build directory) and $(srcdir)/ (the source directory) is important because users can build in a separate directory using the --srcdir option to configure. A rule of the form:

foo.1 : foo.man sedscript
        sed -e sedscript foo.man > foo.1

will fail when the build directory is not the source directory, because foo.man and sedscript are in the source directory.

When using GNU make, relying on VPATH to find the source file will work in the case where there is a single dependency file, since the make automatic variable $< will represent the source file wherever it is. (Many versions of make set $< only in implicit rules.) A Makefile target like

foo.o : bar.c
        $(CC) -I. -I$(srcdir) $(CFLAGS) -c bar.c -o foo.o

should instead be written as

foo.o : bar.c
        $(CC) -I. -I$(srcdir) $(CFLAGS) -c $< -o $@

in order to allow VPATH to work correctly. When the target has multiple dependencies, using an explicit $(srcdir) is the easiest way to make the rule work well. For example, the target above for foo.1 is best written as:

foo.1 : foo.man sedscript
        sed -e $(srcdir)/sedscript $(srcdir)/foo.man > $@

GNU distributions usually contain some files which are not source files: for example, Info files, and the output from Autoconf, Automake, Bison or Flex. Since these files normally appear in the source directory, they should always appear in the source directory, not in the build directory. So Makefile rules to update them should put the updated files in the source directory.

However, if a file does not appear in the distribution, then the Makefile should not put it in the source directory, because building a program in ordinary circumstances should not modify the source directory in any way.

Try to make the build and installation targets, at least (and all their subtargets) work correctly with a parallel make.

Utilities in Makefiles

Write the Makefile commands (and any shell scripts, such as configure) to run in sh, not in csh. Don’t use any special features of ksh or bash.

The configure script and the Makefile rules for building and installation should not use any utilities directly except these:

cat cmp cp diff echo egrep expr false grep install-info ln ls mkdir mv pwd rm rmdir sed sleep sort tar test touch true

The compression program gzip can be used in the dist rule.

Stick to the generally supported options for these programs. For example, don’t use mkdir -p, convenient as it may be, because most systems don’t support it.
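The usual workaround is to create each component of the path with plain mkdir, which is what mkinstalldirs-style scripts do. Here is a minimal sketch under the assumption of a POSIX sh (the helper name make_dirs is invented for illustration; this is not the actual mkinstalldirs script):

```shell
#!/bin/sh
# Portable substitute for `mkdir -p': create each component of a
# slash-separated path with plain `mkdir', ignoring "already exists"
# errors.  Plain `mkdir' is supported everywhere; `-p' historically
# was not.
make_dirs () {
  path=
  oldIFS=$IFS; IFS=/
  # Word-split the (unquoted) argument on `/'; paths containing
  # whitespace are out of scope for this sketch.
  for component in $1; do
    path="$path$component/"
    mkdir "$path" 2>/dev/null || true
  done
  IFS=$oldIFS
}
```

A final test -d on the target directory will catch any component that genuinely could not be created.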

It is a good idea to avoid creating symbolic links in makefiles, since a few systems don’t support them.

The Makefile rules for building and installation can also use compilers and related programs, but should do so via make variables so that the user can substitute alternatives. Here are some of the programs we mean:

ar bison cc flex install ld ldconfig lex make makeinfo ranlib texi2dvi yacc

Use the following make variables to run those programs:

$(AR) $(BISON) $(CC) $(FLEX) $(INSTALL) $(LD) $(LDCONFIG) $(LEX) $(MAKE) $(MAKEINFO) $(RANLIB) $(TEXI2DVI) $(YACC)

When you use ranlib or ldconfig, you should make sure nothing bad happens if the system does not have the program in question. Arrange to ignore an error from that command, and print a message before the command to tell the user that failure of this command does not mean a problem. (The Autoconf AC_PROG_RANLIB macro can help with this.)

If you use symbolic links, you should implement a fallback for systems that don’t have symbolic links.
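One common shape for such a fallback, sketched here under the assumption of a POSIX sh (the helper name link_or_copy is invented), is to try a symbolic link first, then a hard link, then a plain copy:

```shell
#!/bin/sh
# Fallback chain for systems without symbolic links: try `ln -s',
# then a hard link, then a plain copy.  Whatever succeeds first wins.
link_or_copy () {
  src=$1; dest=$2
  rm -f "$dest"
  ln -s "$src" "$dest" 2>/dev/null \
    || ln "$src" "$dest" 2>/dev/null \
    || cp "$src" "$dest"
}
```

Note that a copy, unlike a link, will not track later changes to the source file; for installation purposes that is usually acceptable.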

Additional utilities that can be used via Make variables are:

chgrp chmod chown mknod

It is ok to use other utilities in Makefile portions (or scripts) intended only for particular systems where you know those utilities exist.

Variables for Specifying Commands

Makefiles should provide variables for overriding certain commands, options, and so on.

In particular, you should run most utility programs via variables. Thus, if you use Bison, have a variable named BISON whose default value is set with BISON = bison, and refer to it with $(BISON) whenever you need to use Bison.

File management utilities such as ln, rm, mv, and so on, need not be referred to through variables in this way, since users don’t need to replace them with other programs.

Each program-name variable should come with an options variable that is used to supply options to the program. Append FLAGS to the program-name variable name to get the options variable name; for example, BISONFLAGS. (The names CFLAGS for the C compiler, YFLAGS for yacc, and LFLAGS for lex are exceptions to this rule, but we keep them because they are standard.) Use CPPFLAGS in any compilation command that runs the preprocessor, and use LDFLAGS in any compilation command that does linking as well as in any direct use of ld.

If there are C compiler options that must be used for proper compilation of certain files, do not include them in CFLAGS. Users expect to be able to specify CFLAGS freely themselves. Instead, arrange to pass the necessary options to the C compiler independently of CFLAGS, by writing them explicitly in the compilation commands or by defining an implicit rule, like this:

CFLAGS = -g
ALL_CFLAGS = -I. $(CFLAGS)
.c.o:
        $(CC) -c $(CPPFLAGS) $(ALL_CFLAGS) $<

Do include the -g option in CFLAGS, because that is not required for proper compilation. You can consider it a default that is only recommended. If the package is set up so that it is compiled with GCC by default, then you might as well include -O in the default value of CFLAGS as well.

Put CFLAGS last in the compilation command, after other variables containing compiler options, so the user can use CFLAGS to override the others.

CFLAGS should be used in every invocation of the C compiler, both those which do compilation and those which do linking.

Every Makefile should define the variable INSTALL, which is the basic command for installing a file into the system.

Every Makefile should also define the variables INSTALL_PROGRAM and INSTALL_DATA. (The default for each of these should be $(INSTALL).) Then it should use those variables as the commands for actual installation, for executables and nonexecutables respectively. Use these variables as follows:

$(INSTALL_PROGRAM) foo $(bindir)/foo
$(INSTALL_DATA) libfoo.a $(libdir)/libfoo.a

Optionally, you may prepend the value of DESTDIR to the target filename. Doing this allows the installer to create a snapshot of the installation to be copied onto the real target filesystem later. Do not set the value of DESTDIR in your Makefile, and do not include it in any installed files. With support for DESTDIR, the above examples become:

$(INSTALL_PROGRAM) foo $(DESTDIR)$(bindir)/foo
$(INSTALL_DATA) libfoo.a $(DESTDIR)$(libdir)/libfoo.a

Always use a file name, not a directory name, as the second argument of the installation commands. Use a separate command for each file to be installed.
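Putting these points together, an install rule for a hypothetical program prog might look like the following sketch (the file names are invented for illustration); each command installs exactly one file, and names the destination file rather than the directory:

```makefile
install: all
        $(INSTALL_PROGRAM) prog $(DESTDIR)$(bindir)/prog
        $(INSTALL_DATA) libprog.a $(DESTDIR)$(libdir)/libprog.a
        $(INSTALL_DATA) prog.h $(DESTDIR)$(includedir)/prog.h
```

Naming the destination file explicitly avoids surprises when the target exists as a file rather than a directory.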

Variables for Installation Directories

Installation directories should always be named by variables, so it is easy to install in a nonstandard place. The standard names for these variables are described below. They are based on a standard filesystem layout; variants of it are used in SVR4, 4.4BSD, Linux, Ultrix v4, and other modern operating systems.

These two variables set the root for the installation. All the other installation directories should be subdirectories of one of these two, and nothing should be directly installed into these two directories.

prefix

A prefix used in constructing the default values of the variables listed below. The default value of prefix should be /usr/local. When building the complete GNU system, the prefix will be empty and /usr will be a symbolic link to /. (If you are using Autoconf, write it as @prefix@.) Running make install with a different value of prefix from the one used to build the program should not recompile the program.

exec_prefix

A prefix used in constructing the default values of some of the variables listed below. The default value of exec_prefix should be $(prefix). (If you are using Autoconf, write it as @exec_prefix@.) Generally, $(exec_prefix) is used for directories that contain machine-specific files (such as executables and subroutine libraries), while $(prefix) is used directly for other directories. Running make install with a different value of exec_prefix from the one used to build the program should not recompile the program.

Executable programs are installed in one of the following directories.

bindir

The directory for installing executable programs that users can run. This should normally be /usr/local/bin, but write it as $(exec_prefix)/bin. (If you are using Autoconf, write it as @bindir@.)

sbindir

The directory for installing executable programs that can be run from the shell, but are only generally useful to system administrators. This should normally be /usr/local/sbin, but write it as $(exec_prefix)/sbin. (If you are using Autoconf, write it as @sbindir@.)

libexecdir

The directory for installing executable programs to be run by other programs rather than by users. This directory should normally be /usr/local/libexec, but write it as $(exec_prefix)/libexec. (If you are using Autoconf, write it as @libexecdir@.)

Data files used by the program during its execution are divided into categories in two ways.

  • Some files are normally modified by programs; others are never normally modified (though users may edit some of these).
  • Some files are architecture-independent and can be shared by all machines at a site; some are architecture-dependent and can be shared only by machines of the same kind and operating system; others may never be shared between two machines.

This makes for six different possibilities. However, we want to discourage the use of architecture-dependent files, aside from object files and libraries. It is much cleaner to make other data files architecture-independent, and it is generally not hard.

Therefore, here are the variables Makefiles should use to specify directories:

datadir

The directory for installing read-only architecture-independent data files. This should normally be /usr/local/share, but write it as $(prefix)/share. (If you are using Autoconf, write it as @datadir@.) As a special exception, see $(infodir) and $(includedir) below.

sysconfdir

The directory for installing read-only data files that pertain to a single machine; that is to say, files for configuring a host. Mailer and network configuration files, /etc/passwd, and so forth belong here. All the files in this directory should be ordinary ASCII text files. This directory should normally be /usr/local/etc, but write it as $(prefix)/etc. (If you are using Autoconf, write it as @sysconfdir@.) Do not install executables in this directory (they probably belong in $(libexecdir) or $(sbindir)). Also do not install files that are modified in the normal course of their use (programs whose purpose is to change the configuration of the system excluded). Those probably belong in $(localstatedir).

sharedstatedir

The directory for installing architecture-independent data files which the programs modify while they run. This should normally be /usr/local/com, but write it as $(prefix)/com. (If you are using Autoconf, write it as @sharedstatedir@.)

localstatedir

The directory for installing data files which the programs modify while they run, and that pertain to one specific machine. Users should never need to modify files in this directory to configure the package’s operation; put such configuration information in separate files that go in $(datadir) or $(sysconfdir). $(localstatedir) should normally be /usr/local/var, but write it as $(prefix)/var. (If you are using Autoconf, write it as @localstatedir@.)

libdir

The directory for object files and libraries of object code. Do not install executables here; they probably ought to go in $(libexecdir) instead. The value of libdir should normally be /usr/local/lib, but write it as $(exec_prefix)/lib. (If you are using Autoconf, write it as @libdir@.)

infodir

The directory for installing the Info files for this package. By default, it should be /usr/local/info, but it should be written as $(prefix)/info. (If you are using Autoconf, write it as @infodir@.)

lispdir

The directory for installing any Emacs Lisp files in this package. By default, it should be /usr/local/share/emacs/site-lisp, but it should be written as $(prefix)/share/emacs/site-lisp. If you are using Autoconf, write the default as @lispdir@. In order to make @lispdir@ work, you need the following lines in your configure.in file:

lispdir='${datadir}/emacs/site-lisp'
AC_SUBST(lispdir)

includedir

The directory for installing header files to be included by user programs with the C #include preprocessor directive. This should normally be /usr/local/include, but write it as $(prefix)/include. (If you are using Autoconf, write it as @includedir@.) Most compilers other than GCC do not look for header files in directory /usr/local/include. So installing the header files this way is only useful with GCC. Sometimes this is not a problem because some libraries are only really intended to work with GCC. But some libraries are intended to work with other compilers. They should install their header files in two places, one specified by includedir and one specified by oldincludedir.

oldincludedir

The directory for installing #include header files for use with compilers other than GCC. This should normally be /usr/include. (If you are using Autoconf, you can write it as @oldincludedir@.) The Makefile commands should check whether the value of oldincludedir is empty. If it is, they should not try to use it; they should cancel the second installation of the header files. A package should not replace an existing header in this directory unless the header came from the same package. Thus, if your Foo package provides a header file foo.h, then it should install the header file in the oldincludedir directory if either (1) there is no foo.h there or (2) the foo.h that exists came from the Foo package. To tell whether foo.h came from the Foo package, put a magic string in the file, as part of a comment, and grep for that string.
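A sketch of that check for a hypothetical package named Foo (the magic string and the helper name may_install_header are invented for illustration):

```shell
#!/bin/sh
# Decide whether it is safe to install our header over an existing one:
# install if no header is there yet, or if the one there carries our
# magic comment and therefore came from this package.
MAGIC='This file is part of the Foo package.'   # hypothetical marker
may_install_header () {
  dest=$1
  if test ! -f "$dest" || grep "$MAGIC" "$dest" >/dev/null 2>&1; then
    return 0    # absent, or ours: ok to install
  else
    return 1    # someone else's header: leave it alone
  fi
}
```

The installed header itself must then contain the magic string in a comment, so that future installations recognize it.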

Unix-style man pages are installed in one of the following:

mandir

The top-level directory for installing the man pages (if any) for this package. It will normally be /usr/local/man, but you should write it as $(prefix)/man. (If you are using Autoconf, write it as @mandir@.)

man1dir

The directory for installing section 1 man pages. Write it as $(mandir)/man1.

man2dir

The directory for installing section 2 man pages. Write it as $(mandir)/man2.

...

Don’t make the primary documentation for any GNU software be a man page. Write a manual in Texinfo instead. Man pages are just for the sake of people running GNU software on Unix, which is a secondary application only.

manext

The file name extension for the installed man page. This should contain a period followed by the appropriate digit; it should normally be .1.

man1ext

The file name extension for installed section 1 man pages.

man2ext

The file name extension for installed section 2 man pages.

...

Use these names instead of manext if the package needs to install man pages in more than one section of the manual.

And finally, you should set the following variable:

srcdir

The directory for the sources being compiled. The value of this variable is normally inserted by the configure shell script. (If you are using Autoconf, use srcdir = @srcdir@.)

For example:

# Common prefix for installation directories.
# NOTE: This directory must exist when you start the install.
prefix = /usr/local
exec_prefix = $(prefix)
# Where to put the executable for the command `gcc'.
bindir = $(exec_prefix)/bin
# Where to put the directories used by the compiler.
libexecdir = $(exec_prefix)/libexec
# Where to put the Info files.
infodir = $(prefix)/info

If your program installs a large number of files into one of the standard user-specified directories, it might be useful to group them into a subdirectory particular to that program. If you do this, you should write the install rule to create these subdirectories.

Do not expect the user to include the subdirectory name in the value of any of the variables listed above. The idea of having a uniform set of variable names for installation directories is to enable the user to specify the exact same values for several different GNU packages. In order for this to be useful, all the packages must be designed so that they will work sensibly when the user does so.

Standard Targets for Users

All GNU programs should have the following targets in their Makefiles:

all

Compile the entire program. This should be the default target. This target need not rebuild any documentation files; Info files should normally be included in the distribution, and DVI files should be made only when explicitly asked for. By default, the Make rules should compile and link with -g, so that executable programs have debugging symbols. Users who don’t mind being helpless can strip the executables later if they wish.

install

Compile the program and copy the executables, libraries, and so on to the file names where they should reside for actual use. If there is a simple test to verify that a program is properly installed, this target should run that test.

Do not strip executables when installing them. Devil-may-care users can use the install-strip target to do that.

If possible, write the install target rule so that it does not modify anything in the directory where the program was built, provided make all has just been done. This is convenient for building the program under one user name and installing it under another.

The commands should create all the directories in which files are to be installed, if they don't already exist. This includes the directories specified as the values of the variables prefix and exec_prefix, as well as all subdirectories that are needed. One way to do this is by means of an installdirs target as described below.

Use - before any command for installing a man page, so that make will ignore any errors. This is in case there are systems that don't have the Unix man page documentation system installed.

The way to install Info files is to copy them into $(infodir) with $(INSTALL_DATA) (see section Variables for Specifying Commands), and then run the install-info program if it is present. install-info is a program that edits the Info dir file to add or update the menu entry for the given Info file; it is part of the Texinfo package. Here is a sample rule to install an Info file:

$(DESTDIR)$(infodir)/foo.info: foo.info
        $(POST_INSTALL)
# There may be a newer info file in . than in srcdir.
        -if test -f foo.info; then d=.; \
         else d=$(srcdir); fi; \
        $(INSTALL_DATA) $$d/foo.info $(DESTDIR)$@; \
# Run install-info only if it exists.
# Use `if' instead of just prepending `-' to the
# line so we notice real errors from install-info.
# We use `$(SHELL) -c' because some shells do not
# fail gracefully when there is an unknown command.
        if $(SHELL) -c 'install-info --version' \
           >/dev/null 2>&1; then \
          install-info --dir-file=$(DESTDIR)$(infodir)/dir \
                       $(DESTDIR)$(infodir)/foo.info; \
        else true; fi

When writing the install target, you must classify all the commands into three categories: normal ones, pre-installation commands and post-installation commands. See section Install Command Categories.

uninstall

Delete all the installed files: the copies that the install target creates. This rule should not modify the directories where compilation is done, only the directories where files are installed. The uninstallation commands are divided into three categories, just like the installation commands. See section Install Command Categories.

install-strip

Like install, but strip the executable files while installing them. In many cases, the definition of this target can be very simple:

install-strip:
        $(MAKE) INSTALL_PROGRAM='$(INSTALL_PROGRAM) -s' \
                install

Normally we do not recommend stripping an executable unless you are sure the program has no bugs. However, it can be reasonable to install a stripped executable for actual execution while saving the unstripped executable elsewhere in case there is a bug.

clean

Delete all files from the current directory that are normally created by building the program. Don’t delete the files that record the configuration. Also preserve files that could be made by building, but normally aren’t because the distribution comes with them. Delete .dvi files here if they are not part of the distribution.

distclean

Delete all files from the current directory that are created by configuring or building the program. If you have unpacked the source and built the program without creating any other files, make distclean should leave only the files that were in the distribution.

mostlyclean

Like clean, but may refrain from deleting a few files that people normally don’t want to recompile. For example, the mostlyclean target for GCC does not delete libgcc.a, because recompiling it is rarely necessary and takes a lot of time.

maintainer-clean

Delete almost everything from the current directory that can be reconstructed with this Makefile. This typically includes everything deleted by distclean, plus more: C source files produced by Bison, tags tables, Info files, and so on.

The reason we say "almost everything" is that running the command make maintainer-clean should not delete configure even if configure can be remade using a rule in the Makefile. More generally, make maintainer-clean should not delete anything that needs to exist in order to run configure and then begin to build the program. This is the only exception; maintainer-clean should delete everything else that can be rebuilt.

The maintainer-clean target is intended to be used by a maintainer of the package, not by ordinary users. You may need special tools to reconstruct some of the files that make maintainer-clean deletes. Since these files are normally included in the distribution, we don't take care to make them easy to reconstruct. If you find you need to unpack the full distribution again, don't blame us.

To help make users aware of this, the commands for the special maintainer-clean target should start with these two:

@echo 'This command is intended for maintainers to use; it'
@echo 'deletes files that may need special tools to rebuild.'

TAGS

Update a tags table for this program.

info

Generate any Info files needed. The best way to write the rules is as follows:

info: foo.info

foo.info: foo.texi chap1.texi chap2.texi
        $(MAKEINFO) $(srcdir)/foo.texi

You must define the variable MAKEINFO in the Makefile. It should run the makeinfo program, which is part of the Texinfo distribution. Normally a GNU distribution comes with Info files, and that means the Info files are present in the source directory. Therefore, the Make rule for an info file should update it in the source directory. When users build the package, ordinarily Make will not update the Info files because they will already be up to date.

dvi

Generate DVI files for all Texinfo documentation. For example:

dvi: foo.dvi

foo.dvi: foo.texi chap1.texi chap2.texi
        $(TEXI2DVI) $(srcdir)/foo.texi

You must define the variable TEXI2DVI in the Makefile. It should run the program texi2dvi, which is part of the Texinfo distribution. Alternatively, write just the dependencies, and allow GNU make to provide the command.

dist

Create a distribution tar file for this program. The tar file should be set up so that the file names in the tar file start with a subdirectory name which is the name of the package it is a distribution for. This name can include the version number. For example, the distribution tar file of GCC version 1.40 unpacks into a subdirectory named gcc-1.40.

The easiest way to do this is to create a subdirectory appropriately named, use ln or cp to install the proper files in it, and then tar that subdirectory. Compress the tar file with gzip. For example, the actual distribution file for GCC version 1.40 is called gcc-1.40.tar.gz.

The dist target should explicitly depend on all non-source files that are in the distribution, to make sure they are up to date in the distribution. See section `Making Releases' in GNU Coding Standards.

check

Perform self-tests (if any). The user must build the program before running the tests, but need not install the program; you should write the self-tests so that they work when the program is built but not installed.

The following targets are suggested as conventional names, for programs in which they are useful.

installcheck

Perform installation tests (if any). The user must build and install the program before running the tests. You should not assume that $(bindir) is in the search path.

installdirs

It’s useful to add a target named installdirs to create the directories where files are installed, and their parent directories. There is a script called mkinstalldirs which is convenient for this; you can find it in the Texinfo package. You can use a rule like this:

# Make sure all installation directories (e.g. $(bindir))
# actually exist by making them if necessary.
installdirs: mkinstalldirs
        $(srcdir)/mkinstalldirs $(bindir) $(datadir) \
                                $(libdir) $(infodir) \
                                $(mandir)

This rule should not modify the directories where compilation is done. It should do nothing but create installation directories.

Install Command Categories

When writing the install target, you must classify all the commands into three categories: normal ones, pre-installation commands and post-installation commands.

Normal commands move files into their proper places, and set their modes. They may not alter any files except the ones that come entirely from the package they belong to.

Pre-installation and post-installation commands may alter other files; in particular, they can edit global configuration files or data bases.

Pre-installation commands are typically executed before the normal commands, and post-installation commands are typically run after the normal commands.

The most common use for a post-installation command is to run install-info. This cannot be done with a normal command, since it alters a file (the Info directory) which does not come entirely and solely from the package being installed. It is a post-installation command because it needs to be done after the normal command which installs the package’s Info files.

Most programs don’t need any pre-installation commands, but we have the feature just in case it is needed.

To classify the commands in the install rule into these three categories, insert category lines among them. A category line specifies the category for the commands that follow.

A category line consists of a tab and a reference to a special Make variable, plus an optional comment at the end. There are three variables you can use, one for each category; the variable name specifies the category. Category lines are no-ops in ordinary execution because these three Make variables are normally undefined (and you should not define them in the makefile).

Here are the three possible category lines, each with a comment that explains what it means:

$(PRE_INSTALL)     # Pre-install commands follow.
$(POST_INSTALL)    # Post-install commands follow.
$(NORMAL_INSTALL)  # Normal commands follow.

If you don’t use a category line at the beginning of the install rule, all the commands are classified as normal until the first category line. If you don’t use any category lines, all the commands are classified as normal.
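As an illustration (a sketch, not a canonical rule), here is how category lines might be interleaved in an install rule for the foo program used in the examples above; each category line is a command line, so it begins with a tab:

```makefile
install: installdirs
        $(NORMAL_INSTALL)
        $(INSTALL_PROGRAM) foo $(DESTDIR)$(bindir)/foo
        $(INSTALL_DATA) foo.info $(DESTDIR)$(infodir)/foo.info
        $(POST_INSTALL)
# install-info edits the Info `dir' file, which does not come entirely
# from this package, so it must run as a post-installation command.
        -install-info --dir-file=$(DESTDIR)$(infodir)/dir \
                      $(DESTDIR)$(infodir)/foo.info
```

Everything before $(POST_INSTALL) is classified as normal; only the install-info step runs as a post-installation command.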

These are the category lines for uninstall:

$(PRE_UNINSTALL)     # Pre-uninstall commands follow.
$(POST_UNINSTALL)    # Post-uninstall commands follow.
$(NORMAL_UNINSTALL)  # Normal commands follow.

Typically, a pre-uninstall command would be used for deleting entries from the Info directory.

If the install or uninstall target has any dependencies which act as subroutines of installation, then you should start each dependency’s commands with a category line, and start the main target’s commands with a category line also. This way, you can ensure that each command is placed in the right category regardless of which of the dependencies actually run.

Pre-installation and post-installation commands should not run any programs except for these:

[ basename bash cat chgrp chmod chown cmp cp dd diff echo egrep expand expr false fgrep find getopt grep gunzip gzip hostname install install-info kill ldconfig ln ls md5sum mkdir mkfifo mknod mv printenv pwd rm rmdir sed sort tee test touch true uname xargs yes

The reason for distinguishing the commands in this way is for the sake of making binary packages. Typically a binary package contains all the executables and other files that need to be installed, and has its own method of installing them–so it does not need to run the normal installation commands. But installing the binary package does need to execute the pre-installation and post-installation commands.

Programs to build binary packages work by extracting the pre-installation and post-installation commands. Here is one way of extracting the pre-installation commands:

make -n install -o all \
        PRE_INSTALL=pre-install \
        POST_INSTALL=post-install \
        NORMAL_INSTALL=normal-install \
        | gawk -f pre-install.awk

where the file pre-install.awk could contain this:

$0 ~ /^\t[ \t]*(normal-install|post-install)[ \t]*$/ {on = 0}
on {print $0}
$0 ~ /^\t[ \t]*pre-install[ \t]*$/ {on = 1}

The resulting file of pre-installation commands is executed as a shell script as part of installing the binary package.
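To see the filter in action, here is a small self-contained demonstration (the transcript lines are invented; real input would come from the make -n invocation above, whose category markers are the hyphenated words pre-install, post-install and normal-install):

```shell
#!/bin/sh
# Run an awk filter of the pre-install.awk kind over a fabricated
# `make -n install' transcript.  Only the commands between the
# pre-install marker and the next category marker survive.
extract_pre_install () {
  awk '$0 ~ /^\t[ \t]*(normal-install|post-install)[ \t]*$/ {on = 0}
       on {print $0}
       $0 ~ /^\t[ \t]*pre-install[ \t]*$/ {on = 1}'
}

# Category lines begin with a tab, like the recipe lines they come from.
printf '\tpre-install\n\tupdate-config-db\n\tnormal-install\n\tcp foo bin/\n' \
  | extract_pre_install
```

Only the update-config-db line between the pre-install and normal-install markers is printed; the marker lines themselves and the normal installation command are dropped.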

Quick Reference

This appendix summarizes the directives, text manipulation functions, and special variables which GNU make understands. See section Special Built-in Target Names, section Catalogue of Implicit Rules, and section Summary of Options, for other summaries.

Here is a summary of the directives GNU make recognizes:

define variable

endef

Define a multi-line, recursively-expanded variable.
See section Defining Canned Command Sequences.

ifdef variable

ifndef variable

ifeq (a,b)

ifeq "a" "b"

ifeq 'a' 'b'

ifneq (a,b)

ifneq “a” “b”

ifneq ‘a’ ‘b’

else

endif

Conditionally evaluate part of the makefile.
See section Conditional Parts of Makefiles.

include file

-include file

sinclude file

Include another makefile.
See section Including Other Makefiles.

override variable = value

override variable := value

override variable += value

override variable ?= value

override define variable

endef

Define a variable, overriding any previous definition, even one from the command line.
See section The override Directive.

export

Tell make to export all variables to child processes by default.
See section Communicating Variables to a Sub-make.

export variable

export variable = value

export variable := value

export variable += value

export variable ?= value

unexport variable

Tell make whether or not to export a particular variable to child processes.
See section Communicating Variables to a Sub-make.

vpath pattern path

Specify a search path for files matching a % pattern.
See section The vpath Directive.

vpath pattern

Remove all search paths previously specified for pattern.

vpath

Remove all search paths previously specified in any vpath directive.

Here is a summary of the text manipulation functions (see section Functions for Transforming Text):

$(subst from,to,text)

Replace from with to in text.
See section Functions for String Substitution and Analysis.

$(patsubst pattern,replacement,text)

Replace words matching pattern with replacement in text.
See section Functions for String Substitution and Analysis.

$(strip string)

Remove excess whitespace characters from string.
See section Functions for String Substitution and Analysis.

$(findstring find,text)

Locate find in text.
See section Functions for String Substitution and Analysis.

$(filter pattern…,text)

Select words in text that match one of the pattern words.
See section Functions for String Substitution and Analysis.

$(filter-out pattern…,text)

Select words in text that do not match any of the pattern words.
See section Functions for String Substitution and Analysis.

$(sort list)

Sort the words in list lexicographically, removing duplicates.
See section Functions for String Substitution and Analysis.

$(dir names…)

Extract the directory part of each file name.
See section Functions for File Names.

$(notdir names…)

Extract the non-directory part of each file name.
See section Functions for File Names.

$(suffix names…)

Extract the suffix (the last . and following characters) of each file name.
See section Functions for File Names.

$(basename names…)

Extract the base name (name without suffix) of each file name.
See section Functions for File Names.

$(addsuffix suffix,names…)

Append suffix to each word in names.
See section Functions for File Names.

$(addprefix prefix,names…)

Prepend prefix to each word in names.
See section Functions for File Names.

$(join list1,list2)

Join two parallel lists of words.
See section Functions for File Names.

$(word n,text)

Extract the nth word (one-origin) of text.
See section Functions for File Names.

$(words text)

Count the number of words in text.
See section Functions for File Names.

$(wordlist s,e,text)

Returns the list of words in text from s to e.
See section Functions for File Names.

$(firstword names…)

Extract the first word of names.
See section Functions for File Names.

$(wildcard pattern…)

Find file names matching a shell file name pattern (not a % pattern).
See section The Function wildcard.

$(error text…)

When this function is evaluated, make generates a fatal error with the message text.
See section Functions That Control Make.

$(warning text…)

When this function is evaluated, make generates a warning with the message text.
See section Functions That Control Make.

$(shell command)

Execute a shell command and return its output.
See section The shell Function.

$(origin variable)

Return a string describing how the make variable variable was defined.
See section The origin Function.

$(foreach var,words,text)

Evaluate text with var bound to each word in words, and concatenate the results.
See section The foreach Function.

$(call var,param,…)

Evaluate the variable var replacing any references to $(1), $(2) with the first, second, etc. param values.
See section The call Function.
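As a quick illustration, here is a hypothetical makefile fragment combining several of these functions; the file names are made up, and each comment shows the value the variable would receive:

```make
objects := main.o util.o lib/io.o

sources := $(patsubst %.o,%.c,$(objects))    # main.c util.c lib/io.c
dirs    := $(sort $(dir $(objects)))         # ./ lib/
first   := $(firstword $(objects))           # main.o
stems   := $(basename $(notdir $(objects)))  # main util io
```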

Here is a summary of the automatic variables. See section Automatic Variables, for full information.

$@

The file name of the target.

$%

The target member name, when the target is an archive member.

$<

The name of the first prerequisite.

$?

The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (see section Using make to Update Archive Files).

$^

$+

The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (see section Using make to Update Archive Files). The value of $^ omits duplicate prerequisites, while $+ retains them and preserves their order.

$*

The stem with which an implicit rule matches (see section How Patterns Match).

$(@D)

$(@F)

The directory part and the file-within-directory part of $@.

$(*D)

$(*F)

The directory part and the file-within-directory part of $*.

$(%D)

$(%F)

The directory part and the file-within-directory part of $%.

$(<D)

$(<F)

The directory part and the file-within-directory part of $<.

$(^D)

$(^F)

The directory part and the file-within-directory part of $^.

$(+D)

$(+F)

The directory part and the file-within-directory part of $+.

$(?D)

$(?F)

The directory part and the file-within-directory part of $?.
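For example, in a hypothetical rule such as the following, $@ expands to prog, $< to main.o, and $^ to main.o util.o:

```make
prog: main.o util.o
	$(CC) -o $@ $^
```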

These variables are used specially by GNU make:

MAKEFILES

Makefiles to be read on every invocation of make.
See section The Variable MAKEFILES.

VPATH

Directory search path for files not found in the current directory.
See section VPATH: Search Path for All Prerequisites.

SHELL

The name of the system default command interpreter, usually /bin/sh. You can set SHELL in the makefile to change the shell used to run commands. See section Command Execution.

MAKESHELL

On MS-DOS only, the name of the command interpreter that is to be used by make. This value takes precedence over the value of SHELL. See section Command Execution.

MAKE

The name with which make was invoked. Using this variable in commands has special meaning. See section How the MAKE Variable Works.

MAKELEVEL

The number of levels of recursion (sub-makes).
See section Communicating Variables to a Sub-make.

MAKEFLAGS

The flags given to make. You can set this in the environment or a makefile to set flags.
See section Communicating Options to a Sub-make. It is never appropriate to use MAKEFLAGS directly on a command line: its contents may not be quoted correctly for use in the shell. Always allow recursive invocations of make to obtain these values through the environment from their parent.

MAKECMDGOALS

The targets given to make on the command line. Setting this variable has no effect on the operation of make.
See section Arguments to Specify the Goals.

CURDIR

Set to the pathname of the current working directory (after all -C options are processed, if any). Setting this variable has no effect on the operation of make.
See section Recursive Use of make.

SUFFIXES

The default list of suffixes before make reads any makefiles.

.LIBPATTERNS

Defines the naming of the libraries make searches for, and their order.
See section Directory Search for Link Libraries.

Errors Generated by Make

Here is a list of the more common errors you might see generated by make, and some information about what they mean and how to fix them.

Sometimes make errors are not fatal, especially in the presence of a - prefix on a command script line, or the -k command line option. Errors that are fatal are prefixed with the string ***.

Error messages are all either prefixed with the name of the program (usually make), or, if the error is found in a makefile, with the name of the file and the line number containing the problem.

In the table below, these common prefixes are left off.

[foo] Error NN

[foo] signal description

These errors are not really make errors at all. They mean that a program that make invoked as part of a command script returned a non-0 error code (Error NN), which make interprets as failure, or it exited in some other abnormal fashion (with a signal of some type). See section Errors in Commands. If no *** is attached to the message, then the subprocess failed but the rule in the makefile was prefixed with the - special character, so make ignored the error.

missing separator. Stop.

missing separator (did you mean TAB instead of 8 spaces?). Stop.

This means that make could not understand much of anything about the makefile line it just read. GNU make looks for various kinds of separators (:, =, TAB characters, etc.) to help it decide what kind of makefile line it's seeing. This means it couldn't find a valid one. One of the most common reasons for this message is that you (or perhaps your oh-so-helpful editor, as is the case with many MS-Windows editors) have attempted to indent your command scripts with spaces instead of a TAB character. In this case, make will use the second form of the error above. Remember that every line in the command script must begin with a TAB character. Eight spaces do not count. See section Rule Syntax.

commands commence before first target. Stop.

missing rule before commands. Stop.

This means the first thing in the makefile seems to be part of a command script: it begins with a TAB character and doesn’t appear to be a legal make command (such as a variable assignment). Command scripts must always be associated with a target. The second form is generated if the line has a semicolon as the first non-whitespace character; make interprets this to mean you left out the “target: prerequisite” section of a rule. See section Rule Syntax.

No rule to make target `xxx'.

No rule to make target `xxx', needed by `yyy'.

This means that make decided it needed to build a target, but then couldn’t find any instructions in the makefile on how to do that, either explicit or implicit (including in the default rules database). If you want that file to be built, you will need to add a rule to your makefile describing how that target can be built. Other possible sources of this problem are typos in the makefile (if that filename is wrong) or a corrupted source tree (if that file is not supposed to be built, but rather only a prerequisite).

No targets specified and no makefile found. Stop.

No targets. Stop.

The former means that you didn’t provide any targets to be built on the command line, and make couldn’t find any makefiles to read in. The latter means that some makefile was found, but it didn’t contain any default target and none was given on the command line. GNU make has nothing to do in these situations. See section Arguments to Specify the Makefile.

Makefile `xxx' was not found.

Included makefile `xxx' was not found.

A makefile specified on the command line (first form) or included (second form) was not found.

warning: overriding commands for target `xxx'

warning: ignoring old commands for target `xxx'

GNU make allows commands to be specified only once per target (except for double-colon rules). If you give commands for a target which already has been defined to have commands, this warning is issued and the second set of commands will overwrite the first set. See section Multiple Rules for One Target.

Circular xxx <- yyy dependency dropped.

This means that make detected a loop in the dependency graph: after tracing the prerequisite yyy of target xxx, and its prerequisites, etc., one of them depended on xxx again.

Recursive variable `xxx' references itself (eventually). Stop.

This means you’ve defined a normal (recursive) make variable xxx that, when it’s expanded, will refer to itself (xxx). This is not allowed; either use simply-expanded variables (:=) or use the append operator (+=). See section How to Use Variables.
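For instance, the first definition below triggers this error when CFLAGS is expanded, while the simply-expanded form evaluates its right-hand side immediately and is therefore legal (the flag values are made up):

```make
CFLAGS = $(CFLAGS) -O      # error: CFLAGS references itself (eventually)
CFLAGS := $(CFLAGS) -O     # fine: right-hand side is expanded at once
```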

Unterminated variable reference. Stop.

This means you forgot to provide the proper closing parenthesis or brace in your variable or function reference.

insufficient arguments to function `xxx'. Stop.

This means you haven’t provided the requisite number of arguments for this function. See the documentation of the function for a description of its arguments. See section Functions for Transforming Text.

missing target pattern. Stop.

multiple target patterns. Stop.

target pattern contains no `%'. Stop.

These are generated for malformed static pattern rules. The first means there’s no pattern in the target section of the rule, the second means there are multiple patterns in the target section, and the third means the target doesn’t contain a pattern character (%). See section Syntax of Static Pattern Rules.

warning: -jN forced in submake: disabling jobserver mode.

This warning and the next are generated if make detects error conditions related to parallel processing on systems where sub-makes can communicate (see section Communicating Options to a Sub-make). This warning is generated if a recursive invocation of a make process is forced to have -jN in its argument list (where N is greater than one). This could happen, for example, if you set the MAKE environment variable to make -j2. In this case, the sub-make doesn’t communicate with other make processes and will simply pretend it has two jobs of its own.

warning: jobserver unavailable: using -j1. Add `+' to parent make rule.

In order for make processes to communicate, the parent will pass information to the child. Since this could result in problems if the child process isn’t actually a make, the parent will only do this if it thinks the child is a make. The parent uses the normal algorithms to determine this (see section How the MAKE Variable Works). If the makefile is constructed such that the parent doesn’t know the child is a make process, then the child will receive only part of the information necessary. In this case, the child will generate this warning message and proceed with its build in a sequential manner.

Complex Makefile Example

Here is the makefile for the GNU tar program. This is a moderately complex makefile.

Because it is the first target, the default goal is all. An interesting feature of this makefile is that testpad.h is a source file automatically created by the testpad program, itself compiled from testpad.c.

If you type make or make all, then make creates the tar executable, the rmt daemon that provides remote tape access, and the tar.info Info file.

If you type make install, then make not only creates tar, rmt, and tar.info, but also installs them.

If you type make clean, then make removes the .o files, and the tar, rmt, testpad, testpad.h, and core files.

If you type make distclean, then make not only removes the same files as does make clean but also the TAGS, Makefile, and config.status files. (Although it is not evident, this makefile (and config.status) is generated by the user with the configure program, which is provided in the tar distribution, but is not shown here.)

If you type make realclean, then make removes the same files as does make distclean and also removes the Info files generated from tar.texinfo.

In addition, there are targets shar and dist that create distribution kits.

# Generated automatically from Makefile.in by configure.
# Un*x Makefile for GNU tar program.
# Copyright (C) 1991 Free Software Foundation, Inc.

# This program is free software; you can redistribute
# it and/or modify it under the terms of the GNU
# General Public License …

SHELL = /bin/sh

#### Start of system configuration section. ####

srcdir = .

# If you use gcc, you should either run the
# fixincludes script that comes with it or else use
# gcc with the -traditional option.  Otherwise ioctl
# calls will be compiled incorrectly on some systems.
CC = gcc -O
YACC = bison -y
INSTALL = /usr/local/bin/install -c
INSTALLDATA = /usr/local/bin/install -c -m 644

# Things you might add to DEFS:
# -DSTDC_HEADERS        If you have ANSI C headers and
#                       libraries.
# -DPOSIX               If you have POSIX.1 headers and
#                       libraries.
# -DBSD42               If you have sys/dir.h (unless
#                       you use -DPOSIX), sys/file.h,
#                       and st_blocks in `struct stat'.
# -DUSG                 If you have System V/ANSI C
#                       string and memory functions
#                       and headers, sys/sysmacros.h,
#                       fcntl.h, getcwd, no valloc,
#                       and ndir.h (unless
#                       you use -DDIRENT).
# -DNO_MEMORY_H         If USG or STDC_HEADERS but do not
#                       include memory.h.
# -DDIRENT              If USG and you have dirent.h
#                       instead of ndir.h.
# -DSIGTYPE=int         If your signal handlers
#                       return int, not void.
# -DNO_MTIO             If you lack sys/mtio.h
#                       (magtape ioctls).
# -DNO_REMOTE           If you do not have a remote shell
#                       or rexec.
# -DUSE_REXEC           To use rexec for remote tape
#                       operations instead of
#                       forking rsh or remsh.
# -DVPRINTF_MISSING     If you lack vprintf function
#                       (but have _doprnt).
# -DDOPRNT_MISSING      If you lack _doprnt function.
#                       Also need to define
#                       -DVPRINTF_MISSING.
# -DFTIME_MISSING       If you lack ftime system call.
# -DSTRSTR_MISSING      If you lack strstr function.
# -DVALLOC_MISSING      If you lack valloc function.
# -DMKDIR_MISSING       If you lack mkdir and
#                       rmdir system calls.
# -DRENAME_MISSING      If you lack rename system call.
# -DFTRUNCATE_MISSING   If you lack ftruncate
#                       system call.
# -DV7                  On Version 7 Unix (not
#                       tested in a long time).
# -DEMUL_OPEN3          If you lack a 3-argument version
#                       of open, and want to emulate it
#                       with system calls you do have.
# -DNO_OPEN3            If you lack the 3-argument open
#                       and want to disable the tar -k
#                       option instead of emulating open.
# -DXENIX               If you have sys/inode.h
#                       and need it to be included.

DEFS =  -DSIGTYPE=int -DDIRENT -DSTRSTR_MISSING \
        -DVPRINTF_MISSING -DBSD42
# Set this to rtapelib.o unless you defined NO_REMOTE,
# in which case make it empty.
RTAPELIB = rtapelib.o
LIBS =
DEF_AR_FILE = /dev/rmt8
DEFBLOCKING = 20

CDEBUG = -g
CFLAGS = $(CDEBUG) -I. -I$(srcdir) $(DEFS) \
        -DDEF_AR_FILE=\"$(DEF_AR_FILE)\" \
        -DDEFBLOCKING=$(DEFBLOCKING)
LDFLAGS = -g

prefix = /usr/local
# Prefix for each installed program,
# normally empty or `g'.
binprefix =

# The directory to install tar in.
bindir = $(prefix)/bin

# The directory to install the info files in.
infodir = $(prefix)/info

#### End of system configuration section. ####

SRC1 =  tar.c create.c extract.c buffer.c \
        getoldopt.c update.c gnu.c mangle.c
SRC2 =  version.c list.c names.c diffarch.c \
        port.c wildmat.c getopt.c
SRC3 =  getopt1.c regex.c getdate.y
SRCS =  $(SRC1) $(SRC2) $(SRC3)
OBJ1 =  tar.o create.o extract.o buffer.o \
        getoldopt.o update.o gnu.o mangle.o
OBJ2 =  version.o list.o names.o diffarch.o \
        port.o wildmat.o getopt.o
OBJ3 =  getopt1.o regex.o getdate.o $(RTAPELIB)
OBJS =  $(OBJ1) $(OBJ2) $(OBJ3)
AUX =   README COPYING ChangeLog Makefile.in  \
        makefile.pc configure configure.in \
        tar.texinfo tar.info* texinfo.tex \
        tar.h port.h open3.h getopt.h regex.h \
        rmt.h rmt.c rtapelib.c alloca.c \
        msd_dir.h msd_dir.c tcexparg.c \
        level-0 level-1 backup-specs testpad.c

all:    tar rmt tar.info

tar:    $(OBJS)
        $(CC) $(LDFLAGS) -o $@ $(OBJS) $(LIBS)

rmt:    rmt.c
        $(CC) $(CFLAGS) $(LDFLAGS) -o $@ rmt.c

tar.info: tar.texinfo
        makeinfo tar.texinfo

install: all
        $(INSTALL) tar $(bindir)/$(binprefix)tar
        -test ! -f rmt || $(INSTALL) rmt /etc/rmt
        $(INSTALLDATA) $(srcdir)/tar.info* $(infodir)

$(OBJS): tar.h port.h testpad.h
regex.o buffer.o tar.o: regex.h
# getdate.y has 8 shift/reduce conflicts.

testpad.h: testpad
        ./testpad

testpad: testpad.o
        $(CC) -o $@ testpad.o

TAGS:   $(SRCS)
        etags $(SRCS)

clean:
        rm -f *.o tar rmt testpad testpad.h core

distclean: clean
        rm -f TAGS Makefile config.status

realclean: distclean
        rm -f tar.info*

shar: $(SRCS) $(AUX)
        shar $(SRCS) $(AUX) | compress \
          > tar-`sed -e '/version_string/!d' \
                     -e 's/[^0-9.]*\([0-9.]*\).*/\1/' \
                     -e q
                     version.c`.shar.Z

dist: $(SRCS) $(AUX)
        echo tar-`sed \
             -e '/version_string/!d' \
             -e 's/[^0-9.]*\([0-9.]*\).*/\1/' \
             -e q
             version.c` > .fname
        -rm -rf `cat .fname`
        mkdir `cat .fname`
        ln $(SRCS) $(AUX) `cat .fname`
        tar chZf `cat .fname`.tar.Z `cat .fname`
        -rm -rf `cat .fname` .fname

tar.zoo: $(SRCS) $(AUX)
        -rm -rf tmp.dir
        -mkdir tmp.dir
        -rm tar.zoo
        for X in $(SRCS) $(AUX) ; do \
            echo $$X ; \
            sed 's/$$/^M/' $$X \
            > tmp.dir/$$X ; done
        cd tmp.dir ; zoo aM ../tar.zoo *
        -rm -rf tmp.dir


Footnotes

(1)

GNU Make compiled for MS-DOS and MS-Windows behaves as if prefix has been defined to be the root of the DJGPP tree hierarchy.

(2)

On MS-DOS, the value of current working directory is global, so changing it will affect the following command lines on those systems.

(3)

texi2dvi uses TeX to do the real work of formatting. TeX is not distributed with Texinfo.

Topics 

  1. Introduction
  2. The Java® Event Model
  3. Laying Out User Interface Components
  4. Swing Component Overview

1. Introduction

Graphical programs require a very different programming model from the non-graphical programs we have encountered in the past. A non-graphical program typically runs straight through from beginning to end. By contrast, a graphical program should be capable of running indefinitely, accepting input through the graphical user interface (GUI) and responding accordingly. This kind of programming is known as event-driven programming, because the program’s sequence of operation is determined by events generated by the GUI components. The program responds to events by invoking functions known as event handlers. For example, pushing the Print button may generate a “button-pushed” event, which results in a call to an event handler named print().

In general, a graphical program consists of the following key elements:

  • Code to create GUI components, such as buttons, text areas, scrollable views, etc.
  • Code that lays out the components within a container. Examples of containers are frames, which are stand-alone windows, and applets, which are windows that are embedded within a web page.
  • Event handling code that specifies what should happen when the user interacts with the GUI components.
  • An event loop, whose job is to wait for events to occur and to call appropriate event handlers.

The following pseudo-code illustrates how the event loop might work:

while (true) {                                  // The event loop.
    // Get the next event from the event queue.
    Event e = get_next_event();

    // Process the events by calling appropriate event handlers.
    if (e.eventType == QUIT) {
        exit();                                 // Terminate the program.
    }
    else if (e.eventType == BUTTON_PUSHED) {
        if (e.eventSource == PRINT_BUTTON) {
            print(e);                           // Print out the current page.
        }
        else {
            // ... handle the other buttons.
        }
    }
    else {
        // ... handle the other event types.
    }
}

In C++, the programmer must often explicitly write an event loop similar to the one shown above. This can involve a lot of work, so Java® attempts to shield the programmer from the actual event loop, while still providing a flexible way to specify how events are processed.

2. The Java® Event Model (JDK 1.1 and above)

(Ref. Java® Tutorial)

The Java® event model is based on the notion of event sources and event listeners.

An event source is most frequently a user interface component (such as a button, menu item, or scrollable view), which can notify registered listeners when events of interest occur. Note that an event source may generate both high-level events (e.g., a button click) and low-level events (e.g., a mouse press).

An event listener is an object that can register an interest in receiving certain types of events from an event source. The event source sends out event notifications by calling an appropriate event handling method in the event listener object.

The event listener registration and notification process takes place according to event type. An object wishing to listen to events of a particular type must implement the corresponding event listener interface. The interface simply specifies a standard set of event handling functions that the listener object must provide.

Here is a list of events, and their corresponding event types and event listener interfaces.

EVENT                                                              EVENT TYPE       EVENT LISTENER INTERFACE
Button click, menu selection, text field entry                     ActionEvent      ActionListener
Resizing, moving, showing or hiding a component                    ComponentEvent   ComponentListener
Mouse press, mouse release, mouse click, mouse enter, mouse exit   MouseEvent       MouseListener
Mouse move, mouse drag                                             MouseEvent       MouseMotionListener
Key press, key release                                             KeyEvent         KeyListener
Gain keyboard focus, lose keyboard focus                           FocusEvent       FocusListener
Window closing, window iconified, window deiconified               WindowEvent      WindowListener
Scrolling                                                          AdjustmentEvent  AdjustmentListener
Item selection, e.g. checkbox, list item                           ItemEvent        ItemListener
Text value changed                                                 TextEvent        TextListener
Adding/removing a component to/from a container                    ContainerEvent   ContainerListener

The general approach to implementing an event listener is the same in every case.

  • Write a class that implements the appropriate XXXListener interface.
  • Create an object of type XXXListener.
  • Register the event listener object with an event source by calling the event source’s addXXXListener method.
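These three steps can also be sketched without any visible GUI. The sketch below registers an ActionListener on a JButton and then delivers an ActionEvent to the registered listeners by hand, standing in for the click the toolkit would normally deliver; the button label and class name are made up.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;

public class ListenerSketch {
    public static String run() {
        final StringBuilder log = new StringBuilder();

        JButton button = new JButton("Click me");           // the event source

        // Steps 1 and 2: write a class that implements ActionListener and
        // create an instance of it (here, as an anonymous class).
        ActionListener listener = new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                log.append(e.getActionCommand());           // records the button label
            }
        };

        // Step 3: register the listener with the event source.
        button.addActionListener(listener);

        // Simulate a click by delivering the event ourselves; normally the
        // toolkit does this when the user pushes the button.
        for (ActionListener l : button.getActionListeners())
            l.actionPerformed(new ActionEvent(button,
                    ActionEvent.ACTION_PERFORMED, button.getActionCommand()));

        return log.toString();
    }
}
```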

The following example shows how to create a frame. When the frame is closed, we want to make sure that the program terminates, since this does not happen automatically. We can use a WindowListener to do this.

import javax.swing.*;
import java.awt.event.*;

public class Main {
    public static void main(String[] args) {
        // Create a window.  Then set its size and make it visible.
        JFrame frame = new JFrame("Main window");
        frame.setSize(400,400);
        frame.setVisible(true);

        // Make the program terminate when the frame is closed.  We do this by registering a window
        // listener to receive WindowEvents from the frame.  The window listener will provide an event
        // handler called windowClosing, which will be called when the frame is closed.
        WindowListener listener = new MyWindowListener();   // A class that we write.
        frame.addWindowListener(listener);
    }
}

// Here is our window listener.  We are only interested in windowClosing; however, we must provide
// implementations for all of the methods in the WindowListener interface.
class MyWindowListener implements WindowListener {
    public void windowClosing(WindowEvent e) {
        System.out.println("Terminating the program now.");
        System.exit(0);
    }
    public void windowClosed(WindowEvent e) {}
    public void windowOpened(WindowEvent e) {}
    public void windowActivated(WindowEvent e) {}
    public void windowDeactivated(WindowEvent e) {}
    public void windowIconified(WindowEvent e) {}
    public void windowDeiconified(WindowEvent e) {}
}

Unfortunately, this example involves quite a lot of code. There are a couple of ways to simplify the program.

Anonymous Classes

An anonymous class is a class that has no name. It is declared and instantiated within a single expression. Here is how we could use an anonymous class to simplify the closable frame example:

import javax.swing.*;
import java.awt.event.*;

public class Main {
    public static void main(String[] args) {
        // Create a window.  Then set its size and make it visible.
        JFrame frame = new JFrame("Main window");
        frame.setSize(400,400);
        frame.setVisible(true);

        // Make the frame closable.  Here we have used an anonymous class that implements the
        // WindowListener interface.
        frame.addWindowListener(new WindowListener() {
            public void windowClosing(WindowEvent e) {
                System.out.println("Terminating the program now.");
                System.exit(0);
            }
            public void windowClosed(WindowEvent e) {}
            public void windowOpened(WindowEvent e) {}
            public void windowActivated(WindowEvent e) {}
            public void windowDeactivated(WindowEvent e) {}
            public void windowIconified(WindowEvent e) {}
            public void windowDeiconified(WindowEvent e) {}
        });
    }
}

Event Adapters

An event adapter is just a class that implements an event listener interface, with empty definitions for all of the functions. The idea is that if we subclass the event adapter, we will only have to override the functions that we are interested in. The closable frame example can thus be shortened to:

import javax.swing.*;
import java.awt.event.*;

public class Main {
    public static void main(String[] args) {
        // Create a window.  Then set its size and make it visible.
        JFrame frame = new JFrame("Main window");
        frame.setSize(400,400);
        frame.setVisible(true);

        // Make the frame closable.  Here we have used an anonymous class that extends WindowAdapter.
        frame.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {    // This overrides the empty base class method.
                System.out.println("Terminating the program now.");
                System.exit(0);
            }
        });
    }
}

3. Laying Out User Interface Components

Containers

(Ref. Java® Tutorial)

A Container is a GUI component that can hold other GUI components. Three commonly used container classes are

JFrame - This is a stand-alone window with a title bar, a menu bar, and a border. It is typically used as the top-level container for a graphical Java® application.

JApplet - This is a container that can be embedded within an HTML page. It is typically used as the top-level container for a Java® applet.

JPanel - This is a container that must reside within another container. It provides a way to group several components (e.g. buttons) as a single unit, when they are laid out on the screen.  JPanel can also be used as an area for drawing operations. (When used in this way, it can provide automatic double buffering, which is a technique for producing flicker-free animation.)

A component object, myComponent, can be added to a top-level container object, myContainer, using a statement of the form

    myContainer.getContentPane().add(myComponent);

The following example illustrates how to add a JButton instance to an instance of JFrame.

import javax.swing.*;
import java.awt.event.*;

public class Main {
    public static void main(String[] args) {
        // Create a window.
        JFrame frame = new JFrame("Main window");
        frame.setSize(400,400);

        // Create a button and add it to the frame.
        JButton button = new JButton("Click me");
        frame.getContentPane().add(button);

        // Add an event handler for button clicks.
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {   // Only one method to implement.
                System.out.println(e.getActionCommand());  // Prints out "Click me".
            }
        });

        // Make the frame closable.
        frame.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });

        // Make the frame visible after adding the button.
        frame.setVisible(true);
    }
}

Layout Managers

(Ref. Java® Tutorial)

Our previous example has only one interesting GUI component: a JButton. What if we wanted to add a second JButton and perhaps a JTextArea, so that we can display messages through the GUI? We can control the layout of these components within the container by using a layout manager. Java® comes with six layout managers (five in java.awt and one in javax.swing):

FlowLayout - Lays out components in a line from left to right, moving to the next line when out of room. This layout style resembles the flow of text in a document.

BorderLayout - Lays out components in one of five positions - at the North, South, East or West borders, or else in the Center.

GridLayout - Lays out components in rows and columns of equal sized cells, like a spreadsheet.

GridBagLayout - Lays out components on a grid without requiring them to be of equal size. This is the most flexible and also the most complex of all the layout managers.

CardLayout - Lays out components like index cards, one behind another. (No longer useful, now that Swing provides a JTabbedPane component.)

BoxLayout - Lays out components with either vertical alignment or horizontal alignment. (A new layout manager in Swing.)

It is also possible to set a null layout manager and instead position components by specifying their absolute coordinates using the method

    public void setLocation(int x, int y)
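For example, the following minimal sketch (the class and component names are made up for illustration) positions a button absolutely; setBounds() combines setLocation() and setSize() in one call:

```java
import javax.swing.*;

public class AbsolutePositioning {
    public static void main(String[] args) {
        JPanel panel = new JPanel();
        panel.setLayout(null);                // Disable the layout manager.
        JButton button = new JButton("OK");
        button.setBounds(10, 20, 80, 30);     // setLocation(10, 20) plus setSize(80, 30).
        panel.add(button);
        System.out.println(button.getX() + "," + button.getY());
    }
}
```

Absolute positioning is usually best avoided, since the layout will not adapt when fonts or window sizes change.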

Suppose we wish to position our two JButtons side by side, with the JTextArea positioned below them. We start by embedding the JButtons within a JPanel, using FlowLayout as the layout manager for the JPanel. The JTextArea is best placed within a JScrollPane, since this will permit scrolling when the amount of text exceeds the preferred size of the scroll pane. We can now attach the JPanel and the JScrollPane to the North and South borders of the JFrame, by using BorderLayout as the layout manager for the JFrame. These containment relationships are illustrated below:

JFrame (laid out using BorderLayout)
    JPanel (attached to the North border; laid out using FlowLayout)
        JButton
        JButton
    JScrollPane (attached to the South border)
        JTextArea

Here is the implementation:

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

public class Main {
    public static void main(String[] args) {
        // Create a window and set its layout manager to be BorderLayout.
        // (This happens to be the default layout manager for a JFrame.)
        JFrame frame = new JFrame("Main window");
        frame.setSize(400,400);
        Container cf = frame.getContentPane();
        cf.setLayout(new BorderLayout());

        // Create a panel and set its layout manager to be FlowLayout.
        // (This happens to be the default layout manager for a JPanel.)
        JPanel panel = new JPanel();
        panel.setLayout(new FlowLayout());     // No content pane for JPanel.

        // Create two buttons and add them to the panel.
        JButton button1 = new JButton("Left");
        JButton button2 = new JButton("Right");
        panel.add(button1);
        panel.add(button2);

        // Create a text area for displaying messages.  We embed the text
        // area in a scroll pane so that it doesn't grow unboundedly.
        JTextArea textArea = new JTextArea();
        JScrollPane scrollPane = new JScrollPane(textArea);
        scrollPane.setPreferredSize(new Dimension(400, 100));
        textArea.setEditable(false);

        // Position the panel and the text area within the frame.
        cf.add(panel, "North");
        cf.add(scrollPane, "South");

        // Add event handlers for button clicks.
        class MyListener implements ActionListener {     // A local class.
            private JTextArea mTextArea;
            public void setTextArea(JTextArea t) {
                mTextArea = t;
            }
            public void actionPerformed(ActionEvent e) {
                mTextArea.append(e.getActionCommand() + "\n");
            }
        }
        MyListener listener = new MyListener();
        listener.setTextArea(textArea);      // Cannot do this with an anonymous class.
        button1.addActionListener(listener);
        button2.addActionListener(listener);

        // Make the frame closable.
        frame.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });

        // Make the frame visible after adding the components to it.
        frame.setVisible(true);
    }
}

4. Swing Component Overview

The components that we have seen so far are JFrame, JPanel, JButton, JTextArea and JScrollPane. The links below provide a good overview of the Swing components and how to use them.

Inheritance, Polymorphism and Virtual Functions

(Ref. Lippman 2.4, 17.1-17.6)

Inheritance allows us to specialize the behavior of a class. For example, we might write a Shape class, which provides basic functionality for managing 2D shapes. Our Shape class has member variables for the centroid and area. We may then specialize the Shape class to provide functionality for a particular shape, such as a circle. To do this, we write a class called Circle, which inherits the properties and methods of Shape.

class Circle : public Shape {
    // ... members specific to Circle ...
};

The Circle class adds a new member variable for the radius. Now, when we create a Circle object, it will have centroid and area variables (the Shape part) in addition to the radius variable (the Circle part). The Circle object can also call methods associated with class Shape, such as get_Centroid(). We refer to the Shape class as the base class and we refer to the Circle class as the derived class.

Members of class Shape that are private (e.g. mCentroid) cannot be directly accessed from within the Circle class definition. However, they can be accessed indirectly through the Shape class’s public interface (e.g. get_centroid()). If we wish to allow class Circle to directly access members of class Shape, those members should be made protected (e.g. mfArea). To the outside world, i.e. in main(), protected members behave in exactly the same way as private members.

It is possible to use a base class pointer to address a derived class object, e.g.

Circle *pc;
Shape *ps;

pc = new Circle();
ps = pc;

This feature is known as polymorphism. We can use the Shape pointer to access those methods of the Circle object that are inherited from Shape.  e.g.

    ps->get_centroid();

The Circle class can also override functions that it inherits from the Shape class, as in the case of print(). To make this work, we must declare print() as a virtual function in class Shape. Then, when we use the Shape pointer to access the print() function, as in

    ps->print();

we will invoke the print() function in the underlying Circle object. In the example below, we have used an array of Shape pointers, sa, to store a heterogeneous collection of Circle and Rectangle objects. In the code fragment

for (i = 0; i < num_shapes; i++) {
sa[i]->print();    // This will call either Circle::print() or Rectangle::print(), as appropriate.
}

the decision to call the print() function in class Circle or the one in class Rectangle must be made at run-time. The mechanism by which virtual function calls are resolved is known as dynamic binding.

The implementation of the print() function in class Shape serves as a default implementation, which will be used if the derived class chooses not to provide an overriding implementation. It is possible, however, for the Shape class to require all derived classes to provide an overriding implementation, as in the case of draw(). The draw() function is known as a pure virtual function, because it does not have an implementation in class Shape. Pure virtual functions have a declaration of the form

virtual void draw() = 0;

Since we have not implemented draw() in class Shape, the class is incomplete and we cannot actually create Shape objects. The Shape class is therefore said to be an abstract base class.

We must take care when deleting the objects stored in the array of Shape pointers. In the code fragment

for (i = 0; i < num_shapes; i++)
delete sa[i];  // This will call either Circle::~Circle() or Rectangle::~Rectangle(), as appropriate,
// before calling Shape::~Shape().

we have called delete on sa[i], which is a Shape pointer, even though the object that it points to is really a Circle or a Rectangle. To ensure that the appropriate Circle or Rectangle destructor is called, we must make the Shape destructor a virtual destructor.
 

shape.h

#ifndef _SHAPE_H_
#define _SHAPE_H_

#include <iostream.h>
#include "point.h"

#ifndef DEBUG_PRINT
#ifdef _DEBUG
#define DEBUG_PRINT(str) cout << str << endl;
#else
#define DEBUG_PRINT(str)
#endif
#endif

class Shape {
    // The private members of the Shape class are only accessible within
    // the definition of class Shape. They are not accessible within
    // the definitions of classes derived from the Shape class, e.g. Circle,
    // or within main().
private:
    Point mCentroid;

    // The protected members of the Shape class are accessible within the
    // definition of class Shape. They are also accessible within the
    // definitions of classes derived immediately from the Shape class, e.g.
    // Circle. However, they are not accessible within main().
protected:
    float mfArea;

    // The public members of the Shape class are accessible everywhere i.e. in
    // the Shape class definition, in derived class definitions and in main().
public:
    Shape(float fX, float fY);
    virtual ~Shape();            // A virtual destructor.
    virtual void print();        // A virtual function.
    virtual void draw() = 0;     // A pure virtual function.
    const Point& get_centroid() {
        return mCentroid;
    }
};

#endif
 

shape.C

#include "shape.h"

Shape::Shape(float fX, float fY) : mCentroid(fX, fY) {
    // We must use an initialization list to initialize mCentroid.
    // Here in the body of the constructor would be too late.
    DEBUG_PRINT("In constructor Shape::Shape(float, float)")
}

Shape::~Shape() {
    DEBUG_PRINT("In destructor Shape::~Shape()")
}

void Shape::print() {
    DEBUG_PRINT("In Shape::print()")
    cout << "Centroid: ";
    mCentroid.print();
    cout << "Area = " << mfArea << endl;
}
 

circle.h

#ifndef _CIRCLE_H_
#define _CIRCLE_H_

#include "shape.h"

class Circle : public Shape {
private:
    float mfRadius;

public:
    Circle(float fX=0, float fY=0, float fRadius=0);
    ~Circle();
    void print();
    void draw();
};

#endif
 

circle.C

#include "circle.h"
#define PI 3.1415926536

Circle::Circle(float fX, float fY, float fRadius) : Shape(fX, fY) {
    // We must use an initialization list to initialize the Shape part of the Circle object.
    DEBUG_PRINT("In constructor Circle::Circle(float, float, float)")
    mfRadius = fRadius;
    mfArea = PI * fRadius * fRadius;  // mfArea is a protected member of class Shape.
}

Circle::~Circle() {
    DEBUG_PRINT("In destructor Circle::~Circle()")
}

void Circle::print() {
    DEBUG_PRINT("In Circle::print()")
    cout << "Circle Radius: " << mfRadius << endl;

    // If we want to print out the Shape part of the Circle object as well,
    // we could call the base class print function like this:
    Shape::print();
}

void Circle::draw() {
    // Assume that this draws the circle.
    DEBUG_PRINT("In Circle::draw()")
}
 

rectangle.h

#ifndef _RECTANGLE_H_
#define _RECTANGLE_H_

#include "shape.h"

class Rectangle : public Shape {
private:
    float mfWidth, mfHeight;

public:
    Rectangle(float fX=0, float fY=0, float fWidth=1, float fHeight=1);
    ~Rectangle();
    void print();
    void draw();
};

#endif
 

rectangle.C

#include "rectangle.h"

Rectangle::Rectangle(float fX, float fY, float fWidth, float fHeight) : Shape(fX, fY) {
    // We must use an initialization list to initialize the Shape part of the Rectangle object.
    DEBUG_PRINT("In constructor Rectangle::Rectangle(float, float, float, float)")
    mfWidth = fWidth;
    mfHeight = fHeight;
    mfArea = fWidth * fHeight;  // mfArea is a protected member of class Shape.
}

Rectangle::~Rectangle() {
    DEBUG_PRINT("In destructor Rectangle::~Rectangle()")
}

void Rectangle::print() {
    DEBUG_PRINT("In Rectangle::print()")
    cout << "Rectangle Width: " << mfWidth << " Height: " << mfHeight << endl;

    // If we want to print out the Shape part of the Rectangle object as well,
    // we could call the base class print function like this:
    Shape::print();
}

void Rectangle::draw() {
    // Assume that this draws the rectangle.
    DEBUG_PRINT("In Rectangle::draw()")
}
 

myprog.C

#include "shape.h"
#include "circle.h"
#include "rectangle.h"

int main() {
    const int num_shapes = 5;
    int i;

    // Create an automatic Circle object.
    Circle c;

    // We cannot instantiate a Shape object because the Shape class has a pure virtual function
    // i.e. a virtual function without a definition within class Shape. Class Shape is therefore said
    // to be an abstract base class.
    // Shape s;  // This is not allowed.

    // We are allowed to have Shape pointers, however.
    Shape *sa[num_shapes];      // Create an array of Shape pointers.

    // C++ allows us to use a base class pointer to point to a derived class object. This is known
    // as polymorphism. We can thus store a heterogeneous collection of Circles and Rectangles
    // using the array of Shape pointers.
    sa[0] = new Circle(2,3,1);
    sa[1] = new Rectangle(0,2,2,3);
    sa[2] = new Circle(7,6,3);
    sa[3] = new Circle(0,2,2);
    sa[4] = new Rectangle(4,3,1,1);

    // Print out all of the objects. We have made the print function virtual
    // in class Shape. This means that it can be overridden by print functions
    // with a similar signature that are specific to the derived classes. If
    // a derived class does not provide an implementation of print, then the
    // Shape::print function will be called by default.
    for (i = 0; i < num_shapes; i++) {
        sa[i]->print();    // This will call either Circle::print() or Rectangle::print(), as appropriate.
    }

    // Delete the objects. Note that we have called delete on Shape pointers,
    // even though the objects that we created using new were derived class
    // objects. To ensure that the appropriate destructor for the derived
    // object is called, we must make the Shape destructor virtual.
    for (i = 0; i < num_shapes; i++)
        delete sa[i];  // This will call either Circle::~Circle() or Rectangle::~Rectangle(), as appropriate,
                       // before calling Shape::~Shape().

    return 0;
}

This lecture is courtesy of Petros Komodromos.

Topics

  1. Introduction to Java® 3D
  2. Java® 3D References
  3. Examples and Applications
  4. Scene Graph Structure and basic Java® 3D concepts and classes
  5. A simple Java® 3D program
  6. Performance of Java® 3D

{{< anchor "1" >}}{{< /anchor >}}1. Introduction to Java® 3D

Java® 3D is a general-purpose, platform-independent, object-oriented API for 3D-graphics that enables high-level development of Java® applications and applets with 3D interactive rendering capabilities. With Java® 3D, 3D scenes can be built programmatically, or, alternatively, 3D content can be loaded from VRML or other external files. Java® 3D, as a part of the Java® Media APIs, integrates well with the other Java® technologies and APIs. For example, Java® 2D API can be used to plot selected results, while the Java® Media Framework (JMF) API can be used to capture and stream audio and video.

Java® 3D is based on a directed acyclic graph-based scene structure, known as scene graph, that is used for representing and rendering the scene. The scene structure is a tree-like diagram that contains nodes with all the necessary information to create and render the scene. In particular, the scene graph contains the nodes that are used to represent and transform all objects in the scene, and all viewing control parameters, i.e. all objects with information related to the viewing of the scene. The scene graph can be manipulated very easily and quickly allowing efficient rendering by following a certain optimal order and bypassing hidden parts of objects in the scene.

Java® 3D API has been developed under a joint collaboration between Intel, Silicon Graphics, Apple, and Sun, combining the related knowledge of these companies. It has been designed to be a platform-independent API concerning the host’s operating system (PC/Solaris/Irix/HPUX/Linux) and graphics (OpenGL/Direct3D) platform, as well as the input and output (display) devices. The implementation of Java® 3D is built on top of OpenGL, or Direct3D. The high level Java® 3D API allows rapid application development which is very critical, especially nowadays.

However, Java® 3D has some weaknesses, such as performance that is inferior to that of OpenGL, and limited access to the details of the rendering pipeline. It is also still under development, and several bugs remain to be fixed. Although Java® 3D cannot achieve peak performance, its portability and rapid-development advantages may outweigh the slight performance penalty for many applications.

The current version of the Java® 3D API is Version 1.2, which works together with the Java® 2 Platform. Both APIs can be downloaded for free from the Java® products page of Sun.

 

{{< anchor "2" >}}{{< /anchor >}}2. Java® 3D References

The following list includes many links related to the Java® 3D API.

Java® 3D is specified in the packages javax.media.j3d and javax.vecmath. Supporting classes and utilities are provided in the com.sun.j3d packages.

{{< anchor "3" >}}{{< /anchor >}}3. Examples and Applications

The following examples are provided with Java® 3D; each is located in a directory of the same name under the java3d subdirectory of the demo directory. Where the class to run differs from the directory name, the command is shown.

  • AlternateAppearance
  • Appearance
  • AppearanceMixed
  • AWT_Interaction: java AWTInteraction
  • Background
  • Billboard
  • ConicWorld: java SimpleCylinder; java TexturedSphere
  • FourByFour: appletviewer fbf.html
  • GearTest: java GearBox
  • GeometryByReference
  • GeometryCompression
  • HelloUniverse
  • Lightwave
  • LOD
  • ModelClip
  • Morphing
  • ObjLoad
  • OffScreenCanvas3D
  • OrientedShape3D
  • PackageInfo
  • PickTest
  • PickText3D: java PickText3DGeometry
  • PlatformGeometry
  • PureImmediate
  • ReadRaster
  • Sound
  • SphereMotion: appletviewer SphereMotion.html
  • SplineAnim
  • Text2D
  • Text3D
  • TextureByReference
  • TextureTest
  • TickTockCollision: java TickTockCollision
  • TickTockPicking
  • VirtualInputDevice

For example, on a Sun Ultra 10 workstation the files for the GearTest example are located under the subdirectory:

mit/java_v1.2ref/distrib/sun4x_56/demo/java3d/GearTest

Similarly, if you download Java® 3D on your computer, the examples are typically stored in subdirectories in the subdirectory demo\java3d of the directory where Java® has been downloaded, e.g. at C:\Java\jdk1.3\demo\java3d.

There are many fields in which Java® 3D can be used. The following are just a small selection of Java® 3D applications that are available on the net.

{{< anchor "4" >}}{{< /anchor >}}4. Scene Graph Structure and Basic Java® 3D Concepts and Classes

Scene graph: Content-View Branches

A Java® 3D scene is created as a tree-like graph structure, which is traversed during rendering. The scene graph structure contains nodes that represent either the actual objects of the scene or specifications that describe how to view the objects. Usually, there are two branches in Java® 3D: the content branch, which contains the nodes that describe the actual objects in the scene, and the view branch, which contains nodes that specify viewing-related conditions. Usually, the content branch contains a much larger number of nodes than the view branch.

The following image shows a basic Java® 3D scene graph, where the content branch is located on the left and the view branch on the right side of the graph:

Java® 3D applications construct individual graphic components as separate objects, called nodes, and connect them together into a tree-like scene graph, in which the objects and the viewing of them can easily be manipulated. The scene graph structure contains the description of the virtual universe, which represents the entire scene. All information concerning geometric objects, their attributes, position and orientation, as well as the viewing information, is contained in the scene graph.

The above scene graph consists of superstructure components, in particular a VirtualUniverse and a Locale object, and two BranchGroup objects, which are attached to the superstructure. One branch graph, rooted at the left BranchGroup node, is the content branch, containing all objects relevant to the contents of the scene. The other branch, known as the view branch, contains all the information related to the viewing and the rendering details of the scene.

The state of a shape node, or any other leaf node, is defined, during rendering, by the nodes that lie in the direct path between that node and the root node, i.e. the VirtualUniverse. For example, a TransformGroup node in a path between a leaf node and the scene’s root can change the position, orientation, and scale of the object represented by the leaf node.

SceneGraphObject Hierarchy

The Java® 3D node objects of a Java® 3D scene graph, which are instances of the Node class, may reference node component objects, which are instances of the class NodeComponent. The Node and NodeComponent classes are subclasses of the SceneGraphObject abstract class. Almost all objects that may be included in a scene graph are instances of subclasses of the SceneGraphObject class. A scene graph object is constructed by instantiating the corresponding class, and then, it can be accessed and manipulated using the provided set and get methods.

The following graph shows the class hierarchy of the major subclasses of the SceneGraphObject class:

Class Node and its subclasses

The abstract class Node is the base class for almost all objects that constitute the scene graph. It has two subclasses, the Group and Leaf classes, which themselves have many useful subclasses. Class Group is a superclass of, among others, the BranchGroup and TransformGroup classes. Class Leaf, which is used for nodes with no children, is a superclass of, among others, the Behavior, Light, Shape3D, and ViewPlatform classes. The ViewPlatform node is used to define from where the scene is viewed. In particular, it can be used to specify the location and the orientation of the point of view.

Class NodeComponent and its subclasses

Class NodeComponent is the base class for classes that represent attributes associated with the nodes of the scene graph. It is the superclass of all scene graph node component classes, such as the Appearance, Geometry, PointAttributes, and PolygonAttributes classes. NodeComponent objects are used to specify attributes for a node, such as the color and geometry of a shape node, i.e. a Shape3D node. In particular, a Shape3D node uses an Appearance object and a Geometry object, where the Appearance object is used to control how the associated geometry should be rendered by Java® 3D.

The geometry component information of a Shape3D node, i.e. its geometry and topology, can be specified in an instance of a subclass of the abstract Geometry class. A Geometry object is used as a component object of a Shape3D leaf node. Geometry objects consist of the following four generic geometric types. Each of these geometric types defines a visible object, or a set of objects.

For example, GeometryArray is a subclass of the Geometry class (which itself extends the NodeComponent class) that is extended to create the various primitive types, such as lines, triangle strips, and quadrilaterals.

The IndexedGeometryArray class contains separate integer arrays that index, among others, into arrays of positional coordinates, specifying how vertices are connected to form geometry primitives. This class is extended to create the various indexed primitive types, such as IndexedLineArray, IndexedPointArray, and IndexedQuadArray.

Vertex data may be passed to the geometry array either by copying the data into the array using the existing methods, which is the default mode, or by passing a reference to the data.

The methods for setting positional coordinates, colors, normals, and texture coordinates, such as the method setCoordinates(), copy the data into the GeometryArray, which offers much flexibility in organizing the data.

Another set of methods allows data to be passed and accessed by reference; for example, the setCoordRef3d() method sets a reference to user-supplied data, e.g. coordinate arrays. In order to enable the passing of data by reference, the BY_REFERENCE bit in the vertexFormat field of the constructor for the corresponding GeometryArray must be set accordingly. Data in any array that is referenced by a live or compiled GeometryArray object may only be modified using the updateData method, assuming that the ALLOW_REF_DATA_WRITE capability bit is set accordingly, which can be done using the setCapability method.

The Appearance object defines all rendering state that controls the way the associated geometry should be rendered. The rendering state consists of the following:

  • Point attributes: a PointAttributes object defines the attributes used for points, such as the point size.
  • Line attributes: a LineAttributes object defines the attributes used for lines, such as the width and pattern.
  • Polygon attributes: a PolygonAttributes object defines the attributes used for polygons, such as the rasterization mode (i.e. filled, lines, or points).
  • Coloring attributes: a ColoringAttributes object defines the attributes used in color selection and shading.
  • Rendering attributes: a RenderingAttributes object defines rendering operations, such as whether invisible objects are rendered.
  • Transparency attributes: a TransparencyAttributes object defines the attributes that affect the transparency of the object.
  • Material: a Material object defines the appearance of an object under illumination, such as the ambient color, specular color, diffuse color, emissive color, and shininess. It is used to control the color of the shape.
  • Texture: a Texture object defines the texture image and filtering parameters used when texture mapping is enabled.
  • Texture attributes: a TextureAttributes object defines the attributes that apply to texture mapping, such as the texture mode, texture transform, blend color, and perspective correction mode.
  • Texture coordinate generation: a TexCoordGeneration object defines the attributes that apply to texture coordinate generation.
  • Texture unit state: an array of TextureUnitState objects defines the texture state for each of N separate texture units, allowing multiple textures to be applied to geometry. Each TextureUnitState object contains a Texture object, a TextureAttributes object, and a TexCoordGeneration object for one texture unit.

VirtualUniverse and Locale

After constructing a subgraph, it can be attached to a VirtualUniverse object through a high-resolution Locale object, which is itself attached to the virtual universe. The VirtualUniverse is the root of all Java® 3D scenes, while Locale objects are used for basic spatial placement. The attachment to a Locale object makes all objects in the attached subgraph live (i.e. drawable), while removing it from the locale reverses the effect. Any node added to a live scene graph becomes live. However, in order to be able to modify a live node the corresponding capability bits should be set accordingly.

Typically, a Java® 3D program has only one VirtualUniverse, which consists of one or more Locale objects that may contain collections of subgraphs of the scene graph rooted by BranchGroup nodes, i.e. a large number of branch graphs. Although a Locale has no explicit children, it may reference an arbitrary number of BranchGroup nodes. The subgraphs contain all the scene graph nodes that exist in the universe. A Locale node is used to accurately position a branch graph in a universe, specifying a location within the virtual universe using high-resolution coordinates (HiResCoord), which represent 768 bits of fixed-point values. A Locale is positioned in a single VirtualUniverse node using one of its constructors.

The VirtualUniverse and Locale classes, as well as the View class, are subclasses of the basic superclass Object, as shown below:

 

Branch Graphs

A branch graph is a subgraph of the scene graph rooted in a BranchGroup node. A branch graph can be added to the list of branch graphs of a Locale node using the Locale’s addBranchGraph(BranchGroup bg) method. BranchGroup objects are the only objects that can be inserted into a Locale’s list of objects.

A BranchGroup may be compiled by calling its compile method, which causes the entire subgraph to be compiled, including any BranchGroup nodes contained within the subgraph. A branch graph, rooted by a BranchGroup node, becomes live when inserted into a virtual universe by attaching it to a Locale. However, if a BranchGroup is contained in another subgraph as a child of some other group node, it may not be attached to a Locale node.

Capability Bits, Making Live and Compiling

Certain optimizations can be done to achieve better performance by compiling a subgraph into an optimized internal format, prior to its attachment to a virtual universe. However, many set and get methods of objects that are part of a live or compiled scene graph cannot be accessed. In general, the set and get methods can be used only during the creation of a scene graph, except where explicitly allowed, in order to permit certain optimizations during rendering. The set and get methods that can be used while the object is live or compiled must be specified using a set of capability bits, which are disabled by default, prior to compiling the object or making it live. The methods isCompiled() and isLive() can be used to find out whether a scene graph object is compiled or live. The methods setCapability() and getCapability() can be used to set the capability bits that allow access to the object’s methods. However, the fewer capability bits that are enabled, the more optimizations can be performed during rendering.

Viewing Branch: ViewPlatform, View, Screen3D

The view branch usually has the following structure, consisting of nodes that control the viewing of the scene.

The view branch contains the scene graph viewing objects that can be used to define the viewing parameters and details, such as instances of the ViewPlatform, View, Screen3D, PhysicalBody, and PhysicalEnvironment classes.

Java® 3D uses a viewing model that can be used to transform the position and direction of the viewing while the content branch remains unmodified. This is achieved with the use of the ViewPlatform and the View classes, to specify from where and how, respectively, the scene is being viewed.

The ViewPlatform node controls the position, orientation and scale of the viewer. A viewer can navigate through the virtual universe by changing the transformation in the scene graph hierarchy above the ViewPlatform node. The location of the viewer can be set using a TransformGroup node above the ViewPlatform node. The ViewPlatform node has an activation radius that is used, together with the bounding volumes of Behavior, Background, and other nodes, to determine whether those nodes should be scheduled or turned on, respectively. The method setActivationRadius() can be used to set the activation radius.

A View object connects to the ViewPlatform node in the scene graph and specifies all viewing parameters of the rendering process of a 3D scene. Although it exists outside of the scene graph, it attaches to a ViewPlatform leaf node in the scene graph using the method attachViewPlatform(ViewPlatform vp). A View object contains references to a PhysicalBody and a PhysicalEnvironment object, which can be set using the methods setPhysicalBody() and setPhysicalEnvironment(), respectively.

A View object contains a list of Canvas3D objects into which the view is rendered. The method addCanvas3D(Canvas3D c) of the View class can be used to add the provided Canvas3D object to the list of canvases of the View object.

The Canvas3D class extends the heavyweight class Canvas in order to achieve hardware acceleration, since a low-level rendering library, such as OpenGL, requires the rendering to be done in a native window to enable hardware acceleration.

Finally, all Canvas3D objects on the same physical display device refer to a Screen3D object, which contains all information about that particular display device. The Screen3D object can be obtained from a Canvas3D using the getScreen3D() method.

Default Coordinate System

The default coordinate system is a right-handed Cartesian coordinate system centered on the screen, with the x- and y-axes directed towards the right and the top of the screen, respectively. The z-axis is, by default, directed out of the screen towards the viewer, as shown below. The default unit for distances is the meter and angles are in radians.
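The right-handed convention can be checked with a cross product: crossing the x-axis with the y-axis must yield the z-axis. A minimal sketch in plain Java (no Java 3D classes are needed; the class name is illustrative):

```java
// Minimal sketch: verifying the right-handed convention with a cross product.
// In a right-handed system, x cross y yields z, which by default points out
// of the screen towards the viewer.
public class Handedness {
    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    public static void main(String[] args) {
        double[] z = cross(new double[] {1, 0, 0}, new double[] {0, 1, 0});
        System.out.println(z[0] + " " + z[1] + " " + z[2]);  // 0.0 0.0 1.0
    }
}
```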
 

Transformations

The TransformGroup class, which extends the Group class, can be used to set a spatial transformation, such as the positioning, orientation, and scaling of its children, through the use of a Transform3D object. A TransformGroup node enables the setting and use of a coordinate system relative to its parent coordinate system.

The Transform3D object of a TransformGroup can be set using the method setTransform(Transform3D t), which sets the transformation components of the TransformGroup to those of the passed parameter.

A Transform3D object is a 4x4 double-precision matrix that determines the transformation of a TransformGroup node. For the usual affine case, a point (x, y, z) is mapped as follows:

[ x' ]   [ T00 T01 T02 T03 ] [ x ]
[ y' ] = [ T10 T11 T12 T13 ] [ y ]
[ z' ]   [ T20 T21 T22 T23 ] [ z ]
[ 1  ]   [  0   0   0   1  ] [ 1 ]

The elements T00, T01, T02, T10, T11, T12, T20, T21, and T22 are used to set the rotation and scaling, and T03, T13, and T23 are used to set the translation.
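The action of such a matrix can be sketched in plain Java without any Java 3D classes (the class and method names here are illustrative): the upper-left 3x3 block rotates and scales, and the last column translates.

```java
// Illustrative sketch: applying a 4x4 affine transform to a 3D point.
// The upper-left 3x3 block (T00..T22) rotates/scales, and the last column
// (T03, T13, T23) translates.
public class Transform4x4Demo {
    static double[] apply(double[][] m, double[] p) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
        return r;
    }

    public static void main(String[] args) {
        // Identity rotation plus a translation of (0.7, 0.6, -1.0).
        double[][] m = {
            {1, 0, 0,  0.7},
            {0, 1, 0,  0.6},
            {0, 0, 1, -1.0},
            {0, 0, 0,  1.0}
        };
        double[] p = apply(m, new double[] {0, 0, 0});
        System.out.println(p[0] + " " + p[1] + " " + p[2]);  // 0.7 0.6 -1.0
    }
}
```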

As the scene graph is traversed by the Java® 3D renderer, the transformations specified by the transformation nodes accumulate. The transformations closer to the geometry nodes are applied before those closer to the virtual universe node.

{{< anchor "5" >}}{{< /anchor >}}5. A Simple Java® 3D Program

A Java® 3D program builds a scene graph, using Java® 3D classes and methods, that can be rendered onto the screen.

The following program creates two color cubes and a sphere, as shown in the snapshot that follows the code.

import java.awt.*;     
import javax.swing.*;     
import javax.media.j3d.*;     
import javax.vecmath.*;     
import java.awt.event.*;     
import com.sun.j3d.utils.geometry.*;

public class MyJava3D extends JFrame     
{     
//  Virtual Universe object.     
private VirtualUniverse universe;

//  Locale of the scene graph.     
private Locale locale;     
 

// BranchGroup for the Content Branch of the scene     
private BranchGroup contentBranch;

//  TransformGroup  node of the scene contents     
private TransformGroup contentsTransGr;     
 

// BranchGroup for the View Branch of the scene     
private BranchGroup viewBranch;

// ViewPlatform node, defines from where the scene is viewed.     
private ViewPlatform viewPlatform;

//  Transform group for the ViewPlatform node     
private TransformGroup vpTransGr;

//  View node, defines the View parameters.     
private View view;

// A PhysicalBody object can specify the user’s head     
PhysicalBody body;

// A PhysicalEnvironment object can specify the physical     
// environment in which the view will be generated     
PhysicalEnvironment environment;

// Drawing canvas for 3D rendering     
private Canvas3D canvas;

// Screen3D Object contains screen’s information     
private Screen3D screen;

private Bounds bounds;     
 

public MyJava3D()     
{     
super("My First Java3D Example");

// Creating and setting the Canvas3D     
canvas = new Canvas3D(null);     
getContentPane().setLayout( new BorderLayout( ) );     
getContentPane().add(canvas, BorderLayout.CENTER);

// Setting the VirtualUniverse and the Locale nodes     
setUniverse();

// Setting the content branch     
setContent();

// Setting the view branch     
setViewing();

// To avoid problems between Java3D and Swing     
JPopupMenu.setDefaultLightWeightPopupEnabled(false);

// enabling window closing     
addWindowListener(new WindowAdapter() {     
public void windowClosing(WindowEvent e)     
{System.exit(0); }   });     
setSize(600, 600);     
bounds = new BoundingSphere(new Point3d(0.0,0.0,0.0), Double.MAX_VALUE);     
}     
 

private void setUniverse()     
{     
// Creating the VirtualUniverse and the Locale nodes     
universe = new VirtualUniverse();     
locale = new Locale(universe);     
}

private void setContent()     
{     
// Creating the content branch

contentsTransGr = new TransformGroup();     
contentsTransGr.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

setLighting();

ColorCube cube1 = new ColorCube(0.1);

Appearance appearance = new Appearance();     
cube1.setAppearance(appearance);

contentsTransGr.addChild(cube1);     
 

ColorCube cube2 = new ColorCube(0.25);

Transform3D t1 = new Transform3D();     
t1.rotZ(0.5);     
Transform3D t2 = new Transform3D();     
t2.set(new Vector3f(0.7f, 0.6f,-1.0f));     
t2.mul(t1);     
TransformGroup trans2 = new TransformGroup(t2);     
trans2.addChild(cube2);     
contentsTransGr.addChild(trans2);     
 

Sphere sphere = new Sphere(0.2f);     
Transform3D t3 = new Transform3D();     
t3.set(new Vector3f(-0.2f, 0.5f,-0.2f));     
TransformGroup trans3 = new TransformGroup(t3);

Appearance appearance3 = new Appearance();

Material mat = new Material();     
mat.setEmissiveColor(-0.2f, 1.5f, 0.1f);     
mat.setShininess(5.0f);     
appearance3.setMaterial(mat);     
sphere.setAppearance(appearance3);     
trans3.addChild(sphere);     
contentsTransGr.addChild(trans3);     
 

contentBranch = new BranchGroup();     
contentBranch.addChild(contentsTransGr);     
// Compiling the branch graph before making it live     
contentBranch.compile();

// Adding a branch graph into a locale makes its nodes live (drawable)     
locale.addBranchGraph(contentBranch);     
}

private void setLighting()     
{     
AmbientLight ambientLight =  new AmbientLight();     
ambientLight.setEnable(true);     
ambientLight.setColor(new Color3f(0.10f, 0.1f, 1.0f) );     
ambientLight.setCapability(AmbientLight.ALLOW_STATE_READ);     
ambientLight.setCapability(AmbientLight.ALLOW_STATE_WRITE);     
ambientLight.setInfluencingBounds(bounds);     
contentsTransGr.addChild(ambientLight);

DirectionalLight dirLight =  new DirectionalLight();     
dirLight.setEnable(true);     
dirLight.setColor( new Color3f( 1.0f, 0.0f, 0.0f ) );     
dirLight.setDirection( new Vector3f( 1.0f, -0.5f, -0.5f ) );     
dirLight.setCapability( Light.ALLOW_STATE_WRITE );     
dirLight.setInfluencingBounds(bounds);     
contentsTransGr.addChild(dirLight);     
}

private void setViewing()     
{     
// Creating the viewing branch

viewBranch = new BranchGroup();

// Setting the viewPlatform     
viewPlatform = new ViewPlatform();     
viewPlatform.setActivationRadius(Float.MAX_VALUE);     
viewPlatform.setBounds(bounds);

Transform3D t = new Transform3D();     
t.set(new Vector3f(0.3f, 0.7f, 3.0f));     
vpTransGr = new TransformGroup(t);

// Node capabilities control (grant permission for) read and write access     
// after a node is live or compiled.     
// The number of capabilities should be kept small to allow more optimizations during compilation.     
vpTransGr.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);     
vpTransGr.setCapability( TransformGroup.ALLOW_TRANSFORM_READ);

vpTransGr.addChild(viewPlatform);     
viewBranch.addChild(vpTransGr);

// Setting the view     
view = new View();     
view.setProjectionPolicy(View.PERSPECTIVE_PROJECTION );     
view.addCanvas3D(canvas);

body = new PhysicalBody();     
view.setPhysicalBody(body);     
environment = new PhysicalEnvironment();     
view.setPhysicalEnvironment(environment);

view.attachViewPlatform(viewPlatform);

view.setWindowResizePolicy(View.PHYSICAL_WORLD);

locale.addBranchGraph(viewBranch);     
}

public static void main(String[] args)     
{     
JFrame frame = new MyJava3D();     
frame.setVisible(true);

}     
}

 

A utility class, called SimpleUniverse, can alternatively be used to automatically build a common arrangement of the universe, locale, and viewing classes, avoiding the need to create the viewing branch explicitly. A branch graph is then added to the simple universe to make its nodes live (i.e. drawable).

SimpleUniverse simpleUniverse = new SimpleUniverse(canvas);     
simpleUniverse.addBranchGraph(contentBranch);

6. More on Java® 3D

Java® 3D and Swing

Since Canvas3D extends the heavyweight AWT class Canvas, it should be handled with care when Swing is used. The guidelines provided for mixing AWT and Swing components should be followed. The main problem is that there is a one-to-one correspondence between heavyweight components and their window system peers, i.e. native OS window components. In contrast, a lightweight component does not have a peer of its own and instead uses the peer of its enclosing container.

When lightweight components overlap with heavyweight components, the heavyweight components are always painted on top. In general, the heavyweight Canvas3D of Java® 3D should be kept apart from lightweight Swing components using different containers to avoid problems.

To avoid heavyweight components overlapping Swing popup menus, which are lightweight, the popup menus can be forced to be heavyweight using the method setLightWeightPopupEnabled() of the JPopupMenu class.

Similarly, problems with tooltips can be avoided by invoking the following method:

ToolTipManager.sharedInstance().setLightWeightPopupEnabled(false)

Behaviors

Behaviors are essentially Java® methods that are scheduled to run only when certain requirements are satisfied, according to wakeup conditions. Although a Behavior object is connected to the scene, it is kept in a separate area of the Java® 3D runtime environment and is not considered part of the scene graph. The runtime environment treats a Behavior object differently, ensuring that certain actions take place. All behaviors in Java® 3D extend the Behavior class, an abstract class that itself extends the Leaf class. The Behavior class provides a way to execute certain statements, provided in the processStimulus() method, in order to modify the scene graph when specified criteria are satisfied.

The Behavior class has two major methods, the initialize() method, which is called when the behavior becomes live, and the processStimulus() method, which is called by the Java® 3D scheduler whenever appropriate, as well as a scheduling region. Typically, to create a custom behavior, the Behavior class is extended and referenced from an appropriate place in the scene graph that it should be able to affect. A custom behavior that extends the Behavior class should implement the initialize() and processStimulus() methods, and provide any other methods and constructors that may be needed. The Behavior object contains the state information that is needed by its initialize() and processStimulus() methods. A constructor or another method may be used to set references to the scene graph objects upon which the behavior acts. In addition, the Behavior class is extended by the following three classes: Billboard, Interpolator, and LOD.

The initialize() method is called once when the behavior becomes “live”, i.e. when its BranchGroup node is added to a VirtualUniverse, to initialize this behavior. The Java® 3D behavior scheduler calls the initialize() method, which should never be called directly. The initialize() method is used to set a Behavior object, which has been “added” to the scene graph, into a “known” condition and register the criteria to be used to decide on its execution. Classes that extend Behavior must provide their own initialize() method. The initialize() method allows a Behavior object to initialize its internal state and specify its initial wakeup conditions. Java® 3D automatically invokes a behavior’s initialize code when a BranchGroup node that contains the behavior is added to the virtual universe, i.e. becomes live. The initialize() method should return, since Java® 3D does not invoke the initialize method in a new thread, and therefore it must regain control. Finally, a wakeup condition must be set in order to be able to invoke the processStimulus() method of the behavior.

However, a Behavior object is considered active only when its scheduling bounds intersect the activation volume of a ViewPlatform node. Therefore, scheduling bounds must be provided for a behavior in order for it to be able to receive stimuli. The scheduling bounds of a behavior can be specified as a bounded spatial volume, such as a sphere, using the method setSchedulingBounds(Bounds region). Bounds are used for selective scheduling to improve performance, i.e. to decide whether a behavior should be added to the list of scheduled behaviors.

The processStimulus() method is called whenever the wakeup criteria are satisfied and the ViewPlatform's activation region intersects the Behavior's scheduling region. The method is called by the Java® 3D behavior scheduler when something happens that causes the behavior to execute. A stimulus, i.e. a notification, informs the behavior that it should execute its processStimulus() method. Therefore, applications should not call this method explicitly. Classes that extend the Behavior class must provide their own processStimulus() method. The scheduling region defines a spatial volume that serves to enable the scheduling of Behavior nodes. A Behavior node is active, i.e. it can receive stimuli, whenever its scheduling region intersects the activation volume of a ViewPlatform.

The Java® 3D behavior scheduler invokes the processStimulus() method of a Behavior node when its scheduling region intersects the activation volume of a ViewPlatform node and all wakeup criteria of that behavior are satisfied. The statements in the processStimulus() method may then perform any computations and actions, such as registering state change information that could cause Java® 3D to wake other Behavior objects, modifying node values within the scene graph, changing the internal state of the behavior, specifying its next wakeup conditions, and exiting. A Behavior object is allowed to change its next trigger event. The processStimulus() method typically manipulates scene graph elements, as long as the associated capability bits are set accordingly. For example, a Behavior node can be used to repeatedly modify a TransformGroup node in order to animate the objects associated with that TransformGroup node.

The amount of work done in a processStimulus() method should be limited since the method may lower the frame rate of the renderer. Java® 3D assumes that Behavior methods run to completion and if necessary they spawn threads.

The application must provide the Behavior object with references to those scene graph elements that the Behavior object will manipulate. This is achieved by providing those references as arguments to the constructor of the behavior when the Behavior object is created. Alternatively, the Behavior object itself can obtain access to the relevant scene graph elements either when Java® 3D invokes its initialize() method or each time Java® 3D invokes its processStimulus() method. Typically, the application provides references to the scene graph objects that a behavior should be able to access as arguments to its constructor when the Behavior is instantiated.

The structure of each Behavior method consists of the following parts:

  • code to decode and extract references from the WakeupCondition enumeration that awoke the object
  • code to perform the manipulations associated with the WakeupCondition
  • code to establish new WakeupCondition for this behavior
  • a path to exit, so that execution returns to the Java® 3D behavior scheduler

The WakeupCondition class is an abstract class that specifies a single wakeup condition. It is specialized to 14 different WakeupCriterion subclasses and to 4 subclasses that can be used to create complex wakeup conditions using boolean logic combinations of individual  WakeupCriterion objects. A Behavior node provides a WakeupCondition object to the Java® 3D behavior scheduler using its wakeupOn() method. When that WakeupCondition is satisfied, while the scheduling region intersects the activation volume of a ViewPlatform node, the behavior scheduler passes that same WakeupCondition back to the Behavior via an enumeration.

Java® 3D provides the following wakeup criteria that Behavior objects can use to specify a complex WakeupCondition. All of the following are subclasses of the WakeupCriterion class, which itself is a subclass of the WakeupCondition class:

  • WakeupOnActivation, WakeupOnDeactivation: the ViewPlatform's activation volume enters or exits the behavior's scheduling region
  • WakeupOnAWTEvent: a specified AWT event, such as a key press or mouse movement, occurs
  • WakeupOnBehaviorPost: another behavior posts a specified event
  • WakeupOnCollisionEntry, WakeupOnCollisionExit, WakeupOnCollisionMovement: a specified object collides with, stops colliding with, or moves while colliding with another object
  • WakeupOnElapsedFrames: a specified number of frames have elapsed
  • WakeupOnElapsedTime: a specified time interval has elapsed
  • WakeupOnSensorEntry, WakeupOnSensorExit: a sensor enters or exits a specified region
  • WakeupOnTransformChange: the transform within a specified TransformGroup changes
  • WakeupOnViewPlatformEntry, WakeupOnViewPlatformExit: a ViewPlatform enters or exits a specified region

A Behavior object constructs a WakeupCriterion by providing the appropriate arguments, such as a reference to some scene graph object and a region of interest.

Multiple criteria can be combined using the following classes to form complex wakeup conditions.

  • WakeupOr: specifies any number of wakeup conditions logically ORed together
  • WakeupAnd: specifies any number of wakeup conditions logically ANDed together
  • WakeupOrOfAnds: specifies any number of AND wakeup conditions logically ORed together
  • WakeupAndOfOrs: specifies any number of OR wakeup conditions logically ANDed together

The class hierarchy of the WakeupCondition class is shown below:

The following code provides an example of setting a WakeupCondition object

    public void initialize()     
{     
WakeupCriterion criteria[] = new WakeupCriterion[2];     
criteria[0] = new WakeupOnElapsedFrames(3);     
criteria[1] = new WakeupOnElapsedTime(500);

WakeupCondition condition = new WakeupOr(criteria);     
wakeupOn(condition);     
}

A Behavior node provides a WakeupCondition object to the behavior scheduler via its wakeupOn() method, and the behavior scheduler provides an enumeration of that WakeupCondition back to the behavior. The wakeupOn() method should be called from the initialize() and processStimulus() methods, just prior to exiting these methods.

In the current Java® 3D implementation the behavior scheduler, and therefore the processStimulus method of the Behavior class as well, runs concurrently with the rendering thread. However, a new frame will not start until both the renderer, which may be working on the previous frame, and the behavior scheduler are done.

Java® 3D guarantees that all behaviors with a WakeupOnElapsedFrames will be executed before the next frame starts rendering, i.e. the rendering thread will wait until all behaviors are done with their processStimulus methods before drawing the next frame. In addition, Java® 3D guarantees that all scene graph updates that occur from within a single Behavior object will be reflected in the same frame for consistency purposes.

Finally, Interpolator objects can be used for simple behaviors where a parameter can be varied between a starting and an ending value during a certain time interval.

Lights

Lights can be used to achieve higher quality and realism in the graphics. Lighting capability is provided by the Light class and its subclasses. All light objects have a color, an on/off state, and a bounding volume that controls their illumination range. Java® 3D provides the following four types of lights, which are subclasses of the Light class:

  • AmbientLight: the rays from an ambient light source object come from all directions illuminating shapes evenly
  • DirectionalLight: a directional light source object has parallel rays of light aiming at a certain direction
  • PointLight: the rays from a point light source object are emitted radially from a point in all directions
  • SpotLight: the rays from a spot light source object are emitted radially from a point, but only within a cone

{{< anchor "6" >}}{{< /anchor >}}6. Performance of Java® 3D

Java® 3D aims at high performance by utilizing the available graphics libraries (OpenGL/Direct3D), using 3D-graphics acceleration where available, and supporting rendering optimizations such as scene reorganization and content culling. It is optimized for performance rather than for quality of image rendering. Compilation of branch groups and utilization of capability bits enable speed optimizations. Java® 3D aims to be as fast and high-level as Open Inventor and VRML (Virtual Reality Modeling Language), while offering the portability of Java® and direct access to, and good integration with, all other available Java® APIs. Java® 3D uses the native code of certain libraries, such as OpenGL, at the final steps of rendering to achieve satisfactory performance levels. Scene reorganization and content culling may be used by the renderer to optimize rendering by following an optimal order that bypasses hidden parts of the scene.

Java® 3D rendering is tuned to the underlying hardware across a wide range of hardware and software platforms. Java® 3D is scalable, taking advantage of the multithreading capabilities of Java® when multiple processors are available. The availability of multiple processors is automatically utilized by its independent and asynchronous components, such as the rendering thread and the behavior scheduler, which can be assigned to different processors. Also, branches of the scene tree structure can be manipulated independently and concurrently, utilizing multithreading and multiprocessing.

A thread scheduler is implemented inside Java® 3D, giving the Java® 3D architects full control of all threads and eliminating the need to deal with thread priorities. The underlying architecture uses messages to propagate scene graph changes into certain structures that are used to optimize a particular functionality. There are two structures for geometric objects. One organizes the geometry spatially, enabling spatial queries on the scene graph, such as picking, collisions, and culling. The other is a state snapshot of the scene graph, known as the render bin, which is associated with each view and is used by the renderer thread. There is also a structure associated with behaviors that spatially organizes behavior nodes, and a behavior scheduler thread that executes behaviors that need to be executed.

The thread scheduler is essentially a big infinite loop implemented inside Java® 3D. In each iteration, the thread scheduler runs each thread that needs to run once, waiting for all threads to complete before entering the next iteration. The behavior and rendering threads may each run once in a single iteration. The following operations are conceptually performed within this infinite loop.

while(true)     
{     
process input

if(there is a request for exit)     
break

perform any behaviors

traverse scene graph and render visible objects     
}

Whenever a node of the scene graph is modified, a message is generated with an associated value and any state necessary to reflect the specific change, and is queued with all other messages by the thread scheduler. In each iteration the messages are processed and the various structures are updated accordingly. The update time is very short when the messages are very simple, which is typically the case. In the current implementation the rendering thread and the behavior thread can run concurrently. In particular, the behavior scheduler, and therefore the processStimulus method of a Behavior object, can run concurrently with the renderer. However, a new frame will not start until both the rendering of the previous frame and the behavior scheduler are done.

Finally, Java® 3D offers level-of-detail (LOD) capabilities to further improve performance, using an LOD object. The LOD leaf node is an abstract class that operates on a list of Switch group nodes to select one of the children of the Switch nodes. The LOD class is extended to implement various selection criteria, such as the DistanceLOD subclass.

Topics

  1. Interfaces
  2. Exceptions and Error Handling

1. Interfaces

An interface declares a set of methods and constants, without actually providing an implementation for any of those methods. A class is said to implement an interface if it provides definitions for all of the methods declared in the interface.

Interfaces provide a way to prescribe the behavior that a class must have. In this sense, an interface bears some resemblance to an abstract class. An abstract class may contain default implementations for some of its methods; it is an incomplete class that must be specialized by subclassing. By contrast, an interface does not provide default implementations for any of its methods. It is just a way of specifying the functions that a class should contain. There is no notion of specialization through function overriding.
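The contrast can be sketched in a few lines of Java (the type names below are invented for illustration): the abstract class supplies a default describe() method that subclasses inherit, while the interface only declares what an implementor must provide.

```java
// Sketch: interface vs. abstract class.
// An interface only declares methods; an abstract class may also supply
// default implementations that subclasses inherit.
interface Shape {
    double area();                        // declaration only, no body
}

abstract class AbstractShape {
    abstract double area();               // must be specialized by subclassing
    String describe() {                   // default implementation, inherited
        return "area = " + area();
    }
}

class Square extends AbstractShape {
    private final double side;
    Square(double s) { side = s; }
    double area() { return side * side; }
}

class Circle implements Shape {
    private final double radius;
    Circle(double r) { radius = r; }
    public double area() { return Math.PI * radius * radius; }
}

class ShapeDemo {
    public static void main(String[] args) {
        System.out.println(new Square(2).describe());   // area = 4.0
        System.out.println(new Circle(1).area() > 3);   // true
    }
}
```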

Some points to note about interfaces:

A class may implement more than one interface, whereas it can only extend one parent class.

An interface is treated as a reference type.

Interfaces provide a mechanism for callbacks, rather like pointers to functions in C++.

An interface can extend another interface.

Here is an example of using an interface.

import java.util.*;

interface Collection {
final int MAXIMUM = 100;            // An interface can only have constant data.
public void add(Object obj);
public void remove();
public void print();
}

class Stack implements Collection {           // A last in first out (LIFO) process.
Vector mVector;

public Stack() {
mVector = new Vector(0);                // Create an empty vector.
}

// This adds an element to the top of the stack.
public void add(Object obj) {
if (mVector.size() < MAXIMUM)     // Restrict the size of the Stack.
mVector.insertElementAt(obj, 0);
else
System.out.println("Reached maximum size");
}

// This removes an element from the top of the stack.
public void remove() {
mVector.removeElementAt(0);
}

// This prints out the stack in order from top to bottom.
public void print() {
System.out.println("Printing out the stack");
for (int i = 0; i < mVector.size(); i++)
System.out.println(mVector.elementAt(i));
}
}

class Queue implements Collection {           // A first in first out (FIFO) process.
Vector mVector;

public Queue() {
mVector = new Vector(0);                 // Create an empty vector.
}

// This adds an element to the bottom of the queue.
public void add(Object obj) {
if (mVector.size() < MAXIMUM)      // Restrict the size of the Queue.
mVector.addElement(obj);
else
System.out.println("Reached maximum size");
}

// This removes an element from the top of the queue.
public void remove() {
mVector.removeElementAt(0);
}

// This prints out the queue in order from top to bottom.
public void print() {
System.out.println("Printing out the queue");
for (int i = 0; i < mVector.size(); i++)
System.out.println(mVector.elementAt(i));
}
}
 

class Main {
public static void main(String[] args) {

// Create a stack and add some objects to it.  The function CreateSomeObjects takes a
// reference to the Collection interface as an argument, so it does not need to know anything
// about the Stack class except that it supplies all the methods that the Collection interface
// requires.  This is an example of using callbacks.
Stack s = new Stack();
CreateSomeObjects(s);

// Remove an element from the stack and then print it out.
s.remove();
s.print();         // This will print out the elements 3,7,5.
 

// Create a queue and add some objects to it.
Queue q = new Queue();
CreateSomeObjects(q);

// Remove an element from the queue and then print it out.
q.remove();
q.print();         // This will print out the elements 7,3,4.
}
 

// Create some objects and add them to a collection.  Class Integer allows us to create integer
// objects from the corresponding primitive type, int.
public static void CreateSomeObjects(Collection c) {
c.add(new Integer(5));
c.add(new Integer(7));
c.add(new Integer(3));
c.add(new Integer(4));
}
}

2. Exceptions and Error Handling

What happens when a program encounters a run-time error? Should it exit immediately or should it try to recover? The behavior that is desired may vary depending on how serious the error is. A “file not found” error may not be a reason to terminate the program, whereas an “out of memory error” may. One way to keep track of errors is to return an error code from each function. Exceptions provide an alternative way to handle errors.

The basic idea behind exceptions is as follows. Any method with the potential to produce a remediable error should declare the type of error that it can produce using the throws keyword. The basic remediable error type is class Exception, but one may be more specific about the type of exception that can be thrown e.g. IOException refers to an exception thrown during an input or output operation. When an exception occurs, we use the throw keyword to actually create the Exception object and exit the function.

Code that has the potential to produce an exception should be placed within the try block of a try-catch statement. If the code succeeds, then control passes to the next statement following the try-catch statement. If the code within the try block fails, then the code within the catch block is executed. The following example illustrates this.
 

class LetterTest {
char readLetter() throws Exception {        // Indicates type of exception thrown.
int k;

k = System.in.read();
if (k < 'A' || k > 'z') {
throw new Exception();                  // Throw an exception.
}

return (char)k;
}

public static void main(String[] args) {
LetterTest a = new LetterTest();

try {
char c = a.readLetter();
String str;
str = "Successfully read letter " + c;
System.out.println(str);
}
catch (Exception e) {                           // Handle the exception.
System.out.println("Failed to read letter.");
}
}
}
 

Note: in addition to the Exception class, Java® also provides an Error class, which is reserved for those kinds of problems that a reasonable program should not try to catch.
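The distinction can be illustrated with the class hierarchy itself: both Exception and Error extend Throwable, and a catch clause selects by type. A small sketch (the class and method names are invented for illustration):

```java
// Sketch: Exception vs. Error in the Throwable hierarchy.
// Exceptions represent remediable conditions that a program may catch;
// Errors (e.g. OutOfMemoryError) should normally not be caught.
public class ErrorVsException {
    static String classify(Throwable t) {
        if (t instanceof Error) return "do not catch";
        if (t instanceof Exception) return "recoverable";
        return "other";
    }

    public static void main(String[] args) {
        System.out.println(classify(new OutOfMemoryError()));  // do not catch
        System.out.println(classify(new RuntimeException()));  // recoverable
    }
}
```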

Topics

  1. Introduction
  2. The BeanBox
  3. Creating a Java® Bean
  4. Support for Properties and Events

Java® Beans trail in the Java® Tutorial - a good introduction to Java® Beans.
Java® Beans Development Kit (BDK) - provides a basic development support tool (called the BeanBox) as well as several examples of Java® Bean components. This link also provides links to various commercial development environments for Java® Beans.
Java® Beans API - the various interfaces, classes, and exception types that you will encounter when developing Java® Beans.

1. Introduction

A Java® Bean is a reusable software component that can be manipulated visually in an application builder tool. The idea is that one can start with a collection of such components, and quickly wire them together to form complex programs without actually writing any new code.

Software components must, in general, adopt standard techniques for interacting with the rest of the world. For example, all GUI components inherit the java.awt.Component class, which means that one can rely on them to have certain standard methods like paint(), setSize(), etc. Java® Beans are not actually required to inherit a particular base class or implement a particular interface. However, they do provide support for some or all of the following key features:

  • Support for introspection. Introspection is the process by which an application builder discovers the properties, methods and events that are associated with a Java® Bean. 
  • Support for properties. These are basically member variables that control the appearance or behavior of the Java® Bean. 
  • Support for customization of the appearance and behavior of a Java® Bean.
  • Support for events. This is a mechanism by which Java® Beans can communicate with one another.
  • Support for persistent storage. Persistence refers to the ability to save the current state of an object, so that it can be restored at a later time.

2. The BeanBox

This is a basic tool that Sun provides for testing Java® Beans. To run the BeanBox, your computer needs to have access to a BDK installation. Go to the beans/beanbox subdirectory and type run. This will bring up three windows:

  • The ToolBox window gives you a palette of sample Java® Beans to choose from.
  • The BeanBox window is a container within which you can visually wire beans together.
  • The Properties window allows you to edit the properties of the currently selected Java® Bean.

Try a simple example: choose the Juggler bean from the ToolBox and drop an instance in the BeanBox window. Also create two instances of OurButton. Edit the labels of the buttons to read start and stop using the Properties window. Now wire the start button to the juggler as follows. Select the start button, then go to Edit | Events | action | actionPerformed. Connect the rubber band to the juggler. You will now see an EventTargetDialog box with a list of Juggler methods that could be invoked when the start button is pressed (these are the methods that either take an ActionEvent as an argument or have no arguments at all). Choose startJuggling as the target method and press OK. The BeanBox now generates an adaptor class to wire the start button to the juggler. Wire the stop button to the juggler’s stopJuggling method in a similar manner.

Now that the program has been designed, you can run it within the BeanBox. Simply press the start button to start juggling and press the stop button to stop juggling. If you wish, you can turn your program into an applet by choosing File | MakeApplet in the BeanBox. This will automatically generate a complete set of files for the applet, which can be run in the appletviewer. (Do not expect current versions of Netscape® and Internet Explorer to work with this applet.)

Let’s take a closer look at how the BeanBox works. On start up, it scans the directory beans/jars for files with the .jar extension that contain Java® Beans. These beans are displayed in the ToolBox window, from where they can be selected and dropped into the BeanBox window. Next, we edited the labels of the two instances of OurButton. The BeanBox determined that OurButton has a member named label by looking for setter and getter methods that follow standard naming conventions called design patterns. If you look at the source code in beans/demo/sunw/demo/buttons/OurButton.java, you will see that OurButton has two methods named

public void setLabel(String newLabel) {
    // ...
}

public String getLabel() {
    // ...
}

Design patterns are an implicit technique by which builder tools can introspect a Java® Bean. There is also an explicit technique for exposing properties, methods and events. This involves writing a bean information class, which implements the BeanInfo interface.
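To make the explicit technique concrete, here is a hedged sketch of a bean information class. The bean class ColorBean and its single property are invented for illustration; SimpleBeanInfo is the java.beans convenience class whose methods return no-op defaults, so only the method of interest needs to be overridden.

```java
import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;
import java.beans.SimpleBeanInfo;

// A hypothetical bean with a single property, for illustration only.
class ColorBean {
    private String beanColor = "green";
    public String getBeanColor() { return beanColor; }
    public void setBeanColor(String c) { beanColor = c; }
}

// The explicit technique: a companion class named <BeanName>BeanInfo.
// SimpleBeanInfo implements BeanInfo with no-op defaults, so we only
// override the one method we care about.
class ColorBeanBeanInfo extends SimpleBeanInfo {
    public PropertyDescriptor[] getPropertyDescriptors() {
        try {
            // Expose exactly one property, whatever the design
            // patterns might otherwise reveal to the builder tool.
            return new PropertyDescriptor[] {
                new PropertyDescriptor("beanColor", ColorBean.class)
            };
        } catch (IntrospectionException e) {
            return null;
        }
    }
}
```

A builder tool that finds this class uses the returned descriptors instead of (or in addition to) what introspection via design patterns would discover.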

When we wired the start button to the juggler, the BeanBox set up the juggler to respond to action events generated by the start button. The BeanBox again used design patterns to determine the type of events that can be generated by an OurButton object. The following design patterns indicate that OurButton is capable of firing ActionEvents.

public synchronized void addActionListener(ActionListener l) {
    // ...
}

public synchronized void removeActionListener(ActionListener l) {
    // ...
}

By choosing Edit | Events | action | actionPerformed to connect the start button to the juggler, we were really registering an ActionListener with the start button. The Juggler bean itself does not implement the ActionListener interface. Instead, the BeanBox generated an event hookup adaptor, which implements ActionListener and simply calls the juggler’s startJuggling method in its actionPerformed method:

// Automatically generated event hookup file.

package tmp.sunw.beanbox;
import sunw.demo.juggler.Juggler;
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;

public class ___Hookup_1474c0159e implements
java.awt.event.ActionListener, java.io.Serializable {

public void setTarget(sunw.demo.juggler.Juggler t) {
target = t;
}

public void actionPerformed(java.awt.event.ActionEvent arg0) {
target.startJuggling(arg0);
}

private sunw.demo.juggler.Juggler target;
}

A similar event hookup adaptor was generated when we wired the stop button to the juggler’s stopJuggling method.

Why not make Juggler implement the ActionListener interface directly? This is mainly a matter of convenience. Suppose that Juggler implemented ActionListener and was registered to receive ActionEvents from both the start button and the stop button. Then the Juggler’s actionPerformed method would need to examine each incoming event to determine the event source, before it could know whether to call startJuggling or stopJuggling.
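To see the trade-off concretely, here is a hedged sketch (class names invented, not the BDK code) of what the direct approach would require: one listener serving two event sources must dispatch on getSource(), which is exactly the test that per-hookup adaptor classes make unnecessary.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Hypothetical stand-in for the Juggler bean.
class JugglerBean {
    boolean juggling = false;
    void startJuggling() { juggling = true; }
    void stopJuggling()  { juggling = false; }
}

// If the bean itself implemented ActionListener, one actionPerformed
// method would serve both buttons, so it would have to dispatch on
// the event source -- the very test that adaptor classes avoid.
class JugglerDispatcher implements ActionListener {
    private final JugglerBean target;
    private final Object startButton, stopButton;

    JugglerDispatcher(JugglerBean t, Object start, Object stop) {
        target = t;
        startButton = start;
        stopButton = stop;
    }

    public void actionPerformed(ActionEvent e) {
        if (e.getSource() == startButton) {
            target.startJuggling();
        } else if (e.getSource() == stopButton) {
            target.stopJuggling();
        }
    }
}
```

With one generated adaptor per hookup, each actionPerformed body calls exactly one target method and no source test is needed.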

3. Creating a Java® Bean

This example illustrates how to create a simple Java® Bean. Java Bean classes must be made serializable so that they support persistent storage. To make use of the default serialization capabilities in Java®, the class needs to implement the Serializable interface or inherit a class that implements the Serializable interface. Note that the Serializable interface does not have any methods. It just serves as a flag to say that the designer has tested the class to make sure it works with default serialization. Here is the code:

SimpleBean.java

import java.awt.*;
import java.io.Serializable;

public class SimpleBean extends Canvas implements Serializable {
//Constructor sets inherited properties
public SimpleBean() {
setSize(60,40);
setBackground(Color.red);
}
}

Since this class extends a GUI component, java.awt.Canvas, it will be a visible Java® Bean. Java® Beans may also be invisible.
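To see what Serializable buys a bean, here is a sketch (class and field names invented for this example) that saves a bean's state to a byte stream and restores it. The same default serialization mechanism underlies the persistent storage support mentioned above.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A hypothetical invisible bean; Serializable has no methods to implement.
class SettingsBean implements Serializable {
    String color = "red";
}

class PersistDemo {
    // Round-trip a bean through default serialization.
    static SettingsBean saveAndRestore(SettingsBean bean) throws Exception {
        // Save the bean's state to a byte stream...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(bean);
        out.close();

        // ...and later restore an identical copy from it.
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        return (SettingsBean) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        SettingsBean b = new SettingsBean();
        b.color = "green";
        SettingsBean restored = saveAndRestore(b);
        System.out.println(restored.color);   // prints "green"
    }
}
```

A builder tool can use exactly this mechanism to save a customized bean to disk and reload it in a later session.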

Now the Java® Bean must be compiled and packaged into a JAR file. First run the compiler:

    javac SimpleBean.java

Then create a manifest file

manifest.tmp

    Name: SimpleBean.class
    Java-Bean: True

Finally create the JAR file:

    jar cfmv SimpleBean.jar manifest.tmp SimpleBean.class

The JAR file can now be placed in the beans/jars directory so that the BeanBox will find it on startup, or it can be loaded later by choosing File | LoadJar.

4. Support for Properties and Events

This example builds on the SimpleBean class. It illustrates how to add customizable properties to a Java® Bean and how to generate and receive property change events.

SimpleBean.java

import java.awt.*;
import java.io.Serializable;
import java.beans.*;

public class SimpleBean extends Canvas implements Serializable,
java.beans.PropertyChangeListener {

// Constructor sets inherited properties
public SimpleBean() {
setSize(60,40);
setBackground(Color.red);
}

// This section illustrates how to add customizable properties to the Java Bean.  The names
// of the property setter and getter methods must follow specific design patterns that allow
// the BeanBox (or builder tool) to determine the name of the property variable upon
// introspection.

private Color beanColor = Color.green;

public void setBeanColor(Color newColor) {
Color oldColor = beanColor;
beanColor = newColor;
repaint();

// This relates to bound property support (see below).
changes.firePropertyChange("beanColor", oldColor, newColor);
}

public Color getBeanColor() {
return beanColor;
}

public void paint(Graphics g) {
g.setColor(beanColor);
g.fillRect(20,5,20,30);
}

// This section illustrates how to implement bound property support.  Bound property
// support allows other objects to respond when a property change occurs in this Java
// Bean.  Remember that each property setter method must fire a property change event,
// so that registered listeners can be properly notified.  The addPropertyChangeListener
// and removePropertyChangeListener methods follow design patterns that indicate the
// ability of this Java Bean to generate property change events.  As it happens, these
// methods override methods of the same names, which are inherited through java.awt.Canvas.

private PropertyChangeSupport changes = new PropertyChangeSupport(this);

public void addPropertyChangeListener(PropertyChangeListener l) {
changes.addPropertyChangeListener(l);
}

public void removePropertyChangeListener(PropertyChangeListener l) {
changes.removePropertyChangeListener(l);
}

// This section illustrates how to implement a bound property listener, which will allow this
// Java Bean to register itself to receive property change events fired by other objects.
// Registration simply involves making a call to the other object’s addPropertyChangeListener
// method with this Java Bean as the argument.  If you are using the BeanBox, however, you
// will typically use the event hookup adaptor mechanism to receive the events.  In this case,
// you can set the target method to be the propertyChange method.  (Another word about the
// BeanBox: the Edit | bind property option is a useful way to make a property change in one
// object automatically trigger a property change in another object.  In this case, the BeanBox
// will invoke the correct property setter using code in sunw/beanbox/PropertyHookup.java.
// An adaptor class will not be generated in this case.)

public void propertyChange(PropertyChangeEvent evt) {
String propertyName = evt.getPropertyName();
System.out.println("Received property change event " + propertyName);
}
}

Topics

  1. Java® RMI
  2. The RMI Client
  3. The RMI Server
  4. How to Compile and Run

1. Java® RMI

(Ref: Just Java® 1.2 Ch 16)

The Java® Remote Method Invocation (RMI) framework provides a simple way for Java® programs to communicate with each other. It allows a client program to call methods that belong to a remote object, which lives on a server located elsewhere on the network. The client program can pass arguments to the methods of the remote object and obtain return values, as seamlessly as if it were invoking a method of a local object.

The operation of a remote method call is as follows. The client program actually calls a dummy method, called a stub, which resides locally. The stub takes the method arguments, serializes them and then sends them over the network to the server. On the server side, a corresponding bare-bones method (called a skeleton) deserializes the argument objects and passes them on to the real server method. This process is reversed in order to send the result back to the client.

The Interface to the Remote Object

Both the client and the server must agree on a common interface, which describes the methods that are to be invoked on the server. For example:

WeatherIntf.java

// An interface that describes the service we will be accessing remotely.
public interface WeatherIntf extends java.rmi.Remote {
public String getWeather() throws java.rmi.RemoteException;
}

2. The RMI Client

Here is the example code for the client. The call to Naming.lookup() returns a reference to a Remote object that is available on the server (localhost in this case) under the service name /WeatherServer. Before we can access its methods, the Remote object must be cast to the appropriate interface type (WeatherIntf in this case).

RMIdemo.java

import java.rmi.*;

public class RMIdemo {
public static void main(String[] args) {
try {
// Obtain a reference to an object that lives remotely on a server.
// The object is published under the service name WeatherServer and
// it is known to implement interface WeatherIntf.  We cast to this
// interface in order to access the object’s methods.
Remote robj = Naming.lookup("//localhost/WeatherServer");
WeatherIntf weatherServer = (WeatherIntf)robj;

// Access the services provided by the remote object.
while (true) {
String forecast = weatherServer.getWeather();
System.out.println("The weather will be " + forecast);
Thread.sleep(500);
}
}
catch (Exception e) {
System.out.println(e.getMessage());
}
}
}

3. The RMI Server

Here is the example code for the server. The server makes its services available to the client by registering them with the RMI registry using a call to Naming.rebind(). In this code, the server side object is made available under the name /WeatherServer.

WeatherServer.java

import java.rmi.*;
import java.rmi.server.UnicastRemoteObject;

public class WeatherServer extends UnicastRemoteObject implements WeatherIntf {
public WeatherServer() throws java.rmi.RemoteException {
super();
}

// The method that will be invoked by the client.
public String getWeather() throws RemoteException {
return Math.random() > 0.5 ? "sunny" : "rainy";
}

public static void main(String[] args) {
// We need to set a security manager since this is a server.
// This will allow us to customize access privileges to
// remote clients.
System.setSecurityManager(new RMISecurityManager());

try {
// Create a WeatherServer object and announce its service to the
// registry.
WeatherServer weatherServer = new WeatherServer();
Naming.rebind("/WeatherServer", weatherServer);
}
catch (Exception e) {
System.out.println(e.getMessage());
}
}
}

4. How to Compile and Run

  • Compile all three .java files using javac:
    javac *.java 

  • Generate the stub and the skeleton classes for the server:
     rmic WeatherServer 

  • Put the class files in a location that the JDK knows about, e.g. the current directory or $JAVAHOME/jre/classes.

  • Start the RMI registry:
    rmiregistry 

  • Create a permissions file for the server: 
    permit

        grant {
    permission java.net.SocketPermission "*", "connect";
    permission java.net.SocketPermission "*", "accept";
    // Here is how you could set file permissions:
    // permission java.io.FilePermission "/tmp/*", "read";
    };

  • Start the server using the security policy prescribed by the permissions file: 
     java -Djava.security.policy=permit WeatherServer  

  • Start the client:
    java RMIdemo

The client will now communicate with the server to find out the current weather.


Topics

  1. Member Access
  2. A Linked List Class

1. Member Access

(Ref. Lippman 13.1.3, 17.2, 18.3)

Types of Access Privilege

TYPE OF MEMBER    ACCESSIBLE IN CLASS DEFINITION    ACCESSIBLE BY OBJECTS
private           yes                               no
protected         yes                               no
public            yes                               yes

Member Access Under Inheritance

INHERITANCE        TYPE OF MEMBER   ACCESS LEVEL IN        ACCESSIBLE BY FIRST        ACCESSIBLE BY FIRST
ACCESS SPECIFIER   IN BASE CLASS    FIRST DERIVED CLASS    DERIVED CLASS DEFINITION   DERIVED CLASS OBJECTS
private            private          -                      no                         no
                   protected        private                yes                        no
                   public           private                yes                        no
protected          private          -                      no                         no
                   protected        protected              yes                        no
                   public           protected              yes                        no
public             private          -                      no                         no
                   protected        protected              yes                        no
                   public           public                 yes                        yes

Key Points

  • Private members are only accessible within the class in which they are declared. They are not accessible by derived class definitions.
  • Protected members are not accessible by objects. They are always accessible by a first level derived class.
  • The inheritance access specifier places an upper limit on the access level of inherited members in the derived class.

2. A Linked List Class

Class Declaration

list.h

#ifndef _LIST_H_
#define _LIST_H_

#include <iostream.h>
#ifndef TRUE
#define TRUE 1
#endif                          // TRUE
#ifndef FALSE
#define FALSE 0
#endif                          // FALSE

// Generic list element. ListElement is an abstract class which will be
// subclassed by users of the List class in order to create different types
// of list elements.
class ListElement {
private:
ListElement *mpNext;        // Pointer to next element in the list.

public:
ListElement() {mpNext = NULL;}
virtual ~ListElement() {}

// A pure virtual method which returns some measure of the element’s
// importance for purposes of ordering the list. The implementation
// will be provided by individual subclasses. The list will be ordered
// from most significant (at the head) to least significant.
virtual float ElementValue() = 0;

// A pure virtual method which prints out the contents of the list element.
// Implementation will be provided by individual subclasses.
virtual void print() = 0;

// Grant special access privilege to class list.
friend class List;

// An operator<< which prints out a list.
friend ostream& operator<<(ostream &os, const List& list);
};
 

// A linked list class.
class List {
private:
ListElement *mpHead;        // Pointer to the first element in the list.

public:
// Create an empty list.
List();

// Destroy the list, including all of its elements.
~List();

// Add an element to the list. Returns TRUE if successful.
int AddElement(ListElement *pElement);

// Remove an element from the list. Returns TRUE if successful.
int RemoveElement(ListElement *pElement);

// Return a pointer to the largest element. Does not remove it from the list.
ListElement *GetLargest();

// Return a pointer to the smallest element. Does not remove it from the list.
ListElement *GetSmallest();

// An operator<< which prints out the entire list.
friend ostream& operator<<(ostream &os, const List& list);
};

#endif                          // _LIST_H_

Class Definition

list.C

#include "list.h"

// Create an empty list.
List::List() {
mpHead = NULL;
}

// Destroy the list, including all of its elements.
List::~List() {
ListElement *pCurrent, *pNext;

for (pCurrent = mpHead; pCurrent != NULL; pCurrent = pNext) {
pNext = pCurrent->mpNext;
delete pCurrent;
}
}

// Add an element to the list. Returns TRUE if successful.
int List::AddElement(ListElement *pElement) {
ListElement *pCurrent, *pPrevious;
float fValue = pElement->ElementValue();

pPrevious = mpHead;
for (pCurrent = mpHead; pCurrent != NULL; pCurrent = pCurrent->mpNext) {
if (fValue > pCurrent->ElementValue()) {
// Insert the new element before the current element.
pElement->mpNext = pCurrent;
if (pCurrent == mpHead)
mpHead = pElement;
else
pPrevious->mpNext = pElement;
return TRUE;
}
pPrevious = pCurrent;
}

// We have reached the end of the list.
if (mpHead == NULL)
mpHead = pElement;
else
pPrevious->mpNext = pElement;
pElement->mpNext = NULL;

return TRUE;
}

// Remove an element from the list. Returns TRUE if successful.
int List::RemoveElement(ListElement *pElement) {
ListElement *pCurrent, *pPrevious;

pPrevious = mpHead;
for (pCurrent = mpHead; pCurrent != NULL; pCurrent = pCurrent->mpNext) {
if (pCurrent == pElement) {
if (pCurrent == mpHead)
mpHead = pCurrent->mpNext;
else
pPrevious->mpNext = pCurrent->mpNext;
delete pCurrent;
return TRUE;
}
pPrevious = pCurrent;
}

// The given element was not found in the list.
return FALSE;
}

// Return a pointer to the largest element. Does not remove it from the list.
ListElement *List::GetLargest() {
return mpHead;
}

// Return a pointer to the smallest element. Does not remove it from the list.
ListElement *List::GetSmallest() {
ListElement *pCurrent, *pPrevious;

pPrevious = mpHead;
for (pCurrent = mpHead; pCurrent != NULL; pCurrent = pCurrent->mpNext) {
pPrevious = pCurrent;
}

return pPrevious;
}

// An operator<< which prints out the entire list.
ostream& operator<<(ostream &os, const List& list) {
ListElement *pCurrent;

for (pCurrent = list.mpHead; pCurrent != NULL;
pCurrent = pCurrent->mpNext) {
// Print out the contents of the current list element. Since the
// print method is declared to be virtual in the ListElement class,
// the actual print method to be used will be determined at run time.
pCurrent->print();
}
return os;
}
 

Using the Linked List Class

shapes.h

// Some shapes that we may wish to store in a linked list.
// We will order the shape objects according to their areas.

#ifndef _SHAPE_H_
#define _SHAPE_H_

#define PI 3.14159

#include "list.h"

class Triangle : public ListElement {
private:
float mfBase, mfHeight;

public:
// Unless we provide an explicit base class initializer, the base
// class will be initialized using its default constructor.
Triangle() {mfBase = mfHeight = 0.0;}
Triangle(float fBase, float fHeight) {mfBase = fBase; mfHeight = fHeight;}
~Triangle() {}
float ElementValue() {return (mfBase * mfHeight / 2);}
void print() {cout << "Triangle: area = " << ElementValue() << endl;}
};
 

class Rectangle : public ListElement {
private:
float mfBase, mfHeight;

public:
// Unless we provide an explicit base class initializer, the base
// class will be initialized using its default constructor.
Rectangle() {mfBase = mfHeight = 0.0;}
Rectangle(float fBase, float fHeight) {mfBase = fBase; mfHeight = fHeight;}
~Rectangle() {}
float ElementValue() {return (mfBase * mfHeight);}
void print() {cout << "Rectangle: area = " << ElementValue() << endl;}
};
 
 

class Circle : public ListElement {
private:
float mfRadius;

public:
// Unless we provide an explicit base class initializer, the base
// class will be initialized using its default constructor.
Circle() {mfRadius = 0.0;}
Circle(float fRadius) {mfRadius = fRadius;}
~Circle() {}
float ElementValue() {return (PI * mfRadius * mfRadius);}
void print() {cout << "Circle: area = " << ElementValue() << endl;}
};
#endif                          // _SHAPE_H_
 

list_test.C

#include "shapes.h"

int main() {
List list;
ListElement *p;

p = new Triangle(4, 3);
list.AddElement(p);
p = new Rectangle(2, 1);
list.AddElement(p);
p = new Circle(2);
list.AddElement(p);
p = new Triangle(3, 2);
list.AddElement(p);
p = new Circle(1);
list.AddElement(p);

cout << list << endl;

list.RemoveElement(list.GetLargest());

cout << list << endl;

list.RemoveElement(list.GetSmallest());

cout << list << endl;
}

How to Use make

Introduction

make is a command generator which generates a sequence of commands for execution by the UNIX® shell. These commands usually relate to the maintenance of a set of files in a software development project. We will use make to help us organize our C++ and C source code files during the compilation and linking process. In particular, make can be used to sort out the dependency relations among the various source files, object files and executables and to determine exactly how the object files and executables will be produced.

Invoking make from the Command Line

make may be invoked from the command line by typing:

make -f makefilename program

Here, program is the name of the target, i.e. the program to be made. makefilename is a description file which tells the make utility how to build the target program from its various components. Each of these components could be a target in itself, so make may have to build these targets, using information in the description file, before program can be made. program need not be the highest level target in the hierarchy, although in practice it often is.

It is not always necessary to specify the name of the description file when invoking make. For example,

make program

would cause make to look in the current directory for a default description file named makefile or Makefile, in that order.

Furthermore, it is not even necessary to specify the name of the final target. Simply typing

make

will build the first target found in the default description file, together with all of its components. On the other hand, it is also possible to specify multiple targets when invoking make.

make Description Files (makefiles)

Here is an example of a simple makefile:

program: main.o iodat.o
           cc -o program main.o iodat.o
main.o: main.c
           cc -c main.c
iodat.o: iodat.c
           cc -c iodat.c

Each entry consists of a dependency line containing a colon, and one or more command lines each starting with a tab. Dependency lines have one or more targets to the left of the colon. To the right of the colon are the component files on which the target(s) depend.

A command line will be executed if any target listed on the dependency line does not exist, or if any of the component files are more recent than a target.

Here are some points to remember:

  • Comments start with a pound sign (#).
  • Continuation of a line is denoted by a backslash (\).
  • Lines containing equals signs (=) are macro definitions (see next section).
  • Each command line is typically executed in a separate Bourne shell (sh).

To execute more than one command line in the same shell, type them on the same line, separated by semicolons. Use a \ to continue the line if necessary. For example,

program: main.o iodat.o
          cd newdir; \
          cc -o program main.o iodat.o

would change to the directory newdir before invoking cc. (Note that executing the two commands in separate shells would not produce the required effect, since the cd command is only effective within the shell from which it was invoked.)

The Bourne shell’s pattern matching characters may be used in command lines, as well as to the right of the colon in dependency lines, e.g.

program: *.c
           cc -o program *.c

Macros

Macro Definitions in the Description File

Macro definitions are of the form:

name = string

Subsequent references to $(name) or ${name} are then interpreted as string. Macros are typically grouped together at the beginning of the description file. Macros which have no string to the right of the equals sign are assigned the null string. Macros may be included within macro definitions, regardless of the order in which they are defined.

Here is an example of a macro:

CC = /mit/gnu/arch/sun4x_57/bin/g++
program: program.C
           ${CC} -o program program.C

Shell Environment Variables

Shell variables that were part of the environment before make was invoked are available as macros within make. Within a make description file, however, shell environment variables must be surrounded by parentheses or braces, unless they consist of a single character. For example, ${PWD} may be used in a description file to refer to the current working directory.

Command Line Macro Definitions

Macros can be defined when invoking make e.g.

make program CC=/mit/gnu/arch/sun4x_57/bin/g++

Internal Macros

make has a few predefined macros:

  1. $? evaluates to the list of components that are younger than the current target. Can only be used in description file command lines.
  2. $@ evaluates to the current target name. Can only be used in description file command lines.
  3. $$@ also evaluates to the current target name. However, it can only be used on dependency lines.

Example

PROGS = prog1 prog2 prog3
${PROGS}: $$@.c
           cc -o $@ $?

This will compile the three files prog1.c, prog2.c and prog3.c, unless they are already up to date. During the compilation process, each of the programs becomes the current target in turn. In this particular example, the same effect would be obtained if we replaced $? with $@.c.

Order of Priority of Macro Assignments

The following is the order of priority of macro assignments, from least to greatest:

  1. Internal (default) macro definitions.
  2. Shell environment variables.
  3. Description file macro definitions.
  4. Command line macro definitions.

Items 2 and 3 can be interchanged by specifying the -e option to make.

Macro String Substitution

String substitutions may be performed on all macros used in description file shell commands. However, substitutions occur only at the end of the macro, or immediately before white space. The following example illustrates this:

               LETTERS = abcxyz xyzabc xyz
               print:
                         echo $(LETTERS:xyz=def)

This description file will produce the output

               abcdef xyzabc def

Suffix Rules

The existence of naming and compiling conventions makes it possible to considerably simplify description files. For example, the C compiler requires that C source files always have a .c suffix. Such naming conventions enable make to perform many tasks based on suffix rules. make provides a set of default suffix rules. In addition, new suffix rules can be defined by the user.

For example, the description file shown earlier can be simplified to

program: main.o iodat.o
            cc -o program main.o iodat.o

make will use the following default macros and suffix rules to determine how to build the components main.o and iodat.o.

CC = cc
CFLAGS = -O
.SUFFIXES: .o .c
.c.o:
         ${CC} ${CFLAGS} -c $<

The entries on the .SUFFIXES line represent the suffixes which make will consider significant. Thus, in building iodat.o from the above description file, make looks for a user-specified dependency line containing iodat.o as a target. Finding no such dependency, make notes that the .o suffix is significant and therefore it looks for another file in the current directory which can be used to make iodat.o. Such a file must

  • have the same name (apart from the suffix) as iodat.o.

  • have a significant suffix.

  • be able to be used to make iodat.o according to an existing suffix rule.

make then applies the above suffix rule which specifies how to build a .o file from a .c file. The $< macro evaluates to the component that triggered the suffix rule i.e. iodat.c.

After the components main.o and iodat.o have been updated in this way (if necessary), the target program will be built according to the directions in the description file.

Internal Macros in Suffix Rules

The following internal macros can only be used in suffix rules.

  1. $< evaluates to the component that is being used to make the target.

  2. $* evaluates to the filename part (without any suffix) of the component that is being used to make the target.

Note that the $? macro cannot occur in suffix rules. The $@ macro, however, can be used.

Null Suffixes

Files with null suffixes (no suffix at all) can be made using a suffix rule which has only a single suffix e.g.

.c:
          ${CC} -o $@ $<

This suffix rule will be invoked to produce an executable called program from a source file program.c, if the description file contains a line of the form:

          program:

Note that in this particular situation it would be incorrect to specify that program depends on program.c, because make would then consider the command line to contain a null command and would therefore not invoke the suffix rule. This problem does not arise when building a .o file from a .c file using suffix rules. A .o file can be specified to depend on a .c file (and possibly some additional header files) because of the one-to-one relationship that exists between .o and .c files.

Writing Your Own Suffix Rules

Suffix rules and the list of significant suffixes can be redefined. A line containing .SUFFIXES by itself will delete the current list of significant suffixes e.g.

.SUFFIXES:
.SUFFIXES: .o .c
.c.o:
             ${CC} -o $@ $<


References

[1] Talbott, S. "Managing Projects with Make." O'Reilly & Associates, Inc.

Topics

  1. Threads, Processes and Multitasking
  2. How to Create Threads
  3. The Life Cycle of a Thread
  4. Animations

1. Threads, Processes and Multitasking

Multitasking is the ability of a computer’s operating system to run several programs (or processes) concurrently on a single CPU. This is done by switching from one program to another fast enough to create the appearance that all programs are executing simultaneously. There are two types of multitasking:

Preemptive multitasking. In preemptive multitasking, the operating system decides how to allocate CPU time slices to each program. At the end of a time slice, the currently active program is forced to yield control to the operating system, whether it wants to or not. Examples of operating systems that support preemptive multitasking are Unix®, Windows® 95/98, Windows® NT and the planned release of Mac® OS X.

Cooperative multitasking. In cooperative multitasking, each program controls how much CPU time it needs. This means that a program must cooperate in yielding control to other programs, or else it will hog the CPU. Examples of operating systems that support cooperative multitasking are Windows® 3.1 and Mac® OS 8.5.

Multithreading extends the concept of multitasking by allowing individual programs to perform several tasks concurrently. Each task is referred to as a thread and it represents a separate flow of control. Multithreading can be very useful in practical applications. For example, if a web page is taking too long to load in a web browser, the user should be able to interrupt the loading of the page by clicking on the stop button. The user interface can be kept responsive to the user by using a separate thread for the network activity needed to load the page.

What, then, is the difference between a process and a thread? The answer is that each process has its own set of variables, whereas the threads within a program share the same data and system resources. A multithreaded program must therefore be very careful about the way that threads access and modify data, or else unpredictable behavior may occur.

2. How to Create Threads

(Ref. Java® Tutorial)

We can create a new thread using the Thread class provided in the java.lang package. There are two ways to use the Thread class.

  • By creating a subclass of Thread.
  • By writing a class that implements the Runnable interface.

Subclassing the Thread class

In this approach, we create a subclass of the Thread class. The Thread class has a method named run(), which we can override in our subclass. Our implementation of the run() method must contain all code that is to be executed within the thread.

class MyClass extends Thread {
// …

public void run() {
// All code to be executed within the thread goes here.
}
}
 

We can create a new thread by instantiating our class, and we run it by calling the start() method that we inherited from class Thread.

MyClass a = new MyClass();
a.start();

This approach for creating a thread works fine from a technical standpoint. Conceptually, however, it does not make that much sense to say that MyClass “is a” Thread. All that we are really interested in doing is to provide a run() method that the Thread class can execute. The next approach is geared to do exactly this.

Implementing the Runnable Interface

In this approach, we write a class that implements the Runnable interface. The Runnable interface requires us to implement a single method named run(), within which we place all code that is to be executed within the thread.

class MyClass implements Runnable {
// …

public void run() {
// All code to be executed within the thread goes here.
}
}
 

We can create a new thread by creating a Thread object from an object of type MyClass. We run the thread by calling the Thread object’s start() method.

MyClass a = new MyClass();
Thread t = new Thread(a);
t.start();

3. The Life Cycle of a Thread

(Ref. Java® Tutorial)

A thread can be in one of four states during its lifetime:

  • new - A new thread is one that has been created (using the new operator), but has not yet been started.

  • runnable - A thread becomes runnable once its start() method has been invoked. This means that the code in the run() method can execute whenever the thread receives CPU time from the operating system.

  • blocked - A thread can become blocked if one of the following events occurs:

    • The thread’s sleep() method is invoked. In this case, the thread remains blocked until the specified number of milliseconds elapses.
    • The thread calls the wait() method of an object. In this case, the thread remains blocked until either the object’s notify() method or its notifyAll() method is called from another thread. The calls to wait(), notify() and notifyAll() are typically found within synchronized methods of the object.
    • The thread has blocked on an input/output operation. In this case, the thread remains blocked until the i/o operation has completed.
  • dead - A thread typically dies when the run() method has finished executing.

Note: The following methods in the java.lang.Thread class should no longer be used, since they can lead to unpredictable behavior: stop(), suspend() and resume().

The following example illustrates various thread states. The main thread in our program creates a new thread, Thread-0. It then starts Thread-0, thereby making Thread-0 runnable, so that it prints out an integer every 500 milliseconds. We call the sleep() method to enforce the 500 millisecond delay between printing two consecutive integers. In the meantime, the main thread proceeds to print out an integer of its own only once every second. The output from the program shows that the two threads are running in parallel. When the main thread finishes its for loop, it stops Thread-0.

We maintain a variable, myThread, which initially references Thread-0. This variable is polled by the run() method to make sure that it is still referencing Thread-0. All we have to do to stop the thread is to set myThread to null. This will cause the run() method to terminate normally.

class MyClass implements Runnable {
int i;
Thread myThread;

public MyClass() {
i = 0;
}

// This will terminate the run() method.
public void stop() {
myThread = null;
}

// The run() method simply prints out a sequence of integers, one every half second.
public void run() {
// Get a handle on the thread that we are running in.
myThread = Thread.currentThread();

// Keep going as long as myThread is the same as the current thread.
while (Thread.currentThread() == myThread) {
System.out.println(Thread.currentThread().getName() + ": " + i);
i++;

try {
Thread.sleep(500); // Tell the thread to sleep for half a second.
}
catch (InterruptedException e) {}
}
}
}
 

class Threadtest {
public static void main(String[] args) {
MyClass a = new MyClass();
Thread t = new Thread(a);

// Start another thread.  This thread will run in parallel to the main thread.
System.out.println(Thread.currentThread().getName() + ": Starting a separate thread");
t.start();

// The main thread proceeds to print out a sequence of integers of its own, one every second.
for (int i = 0; i < 6; i++) {
System.out.println(Thread.currentThread().getName() + ": " + i);
// Tell the main thread to sleep for a second.
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {}
}

// Stop the parallel thread.  We do this by setting myThread to null in our runnable object.
System.out.println(Thread.currentThread().getName() + ": Stopping the thread");
a.stop();
}
}

4. Animations

Here is an example of a simple animation. We have used a separate thread to control the motion of a ball on the screen.

anim.html

<HTML>
<BODY>
<APPLET CODE="Animation.class" WIDTH=300 HEIGHT=400>
</APPLET>
</BODY>
</HTML>
 

Animation.java

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class Animation extends JApplet implements Runnable, ActionListener {
int miFrameNumber = -1;
int miTimeStep;
Thread mAnimationThread;
boolean mbIsPaused = false;
Button mButton;
Point mCenter;
int miRadius;
int miDX, miDY;

public void init() {
// Make the animation run at 20 frames per second.  We do this by
// setting the timestep to 50ms.
miTimeStep = 50;

// Initialize the parameters of the circle.
mCenter = new Point(getSize().width/2, getSize().height/2);
miRadius = 15;
miDX = 4;  // X offset per timestep.
miDY = 3;  // Y offset per timestep.

// Create a button to start and stop the animation.
mButton = new Button("Stop");
getContentPane().add(mButton, "North");
mButton.addActionListener(this);

// Create a JPanel subclass and add it to the JApplet.  All drawing
// will be done here, so we must write the paintComponent() method.
// Note that the anonymous class has access to the private data of
// class Animation, because it is defined locally.
getContentPane().add(new JPanel() {
public void paintComponent(Graphics g) {
// Paint the background.
super.paintComponent(g);

// Display the frame number.
g.drawString("Frame " + miFrameNumber, getSize().width/2 - 40,
getSize().height - 15);

// Draw the circle.
g.setColor(Color.red);
g.fillOval(mCenter.x-miRadius, mCenter.y-miRadius, 2*miRadius,
2*miRadius);
}
}, "Center");
}

public void start() {
if (mbIsPaused) {
// Don’t do anything.  The animation has been paused.
} else {
// Start animating.
if (mAnimationThread == null) {
mAnimationThread = new Thread(this);
}
mAnimationThread.start();
}
}

public void stop() {
// Stop the animating thread by setting the mAnimationThread variable
// to null.  This will cause the thread to break out of the while loop,
// so that the run() method terminates naturally.
mAnimationThread = null;
}

public void actionPerformed(ActionEvent e) {
if (mbIsPaused) {
mbIsPaused = false;
mButton.setLabel("Stop");
start();
} else {
mbIsPaused = true;
mButton.setLabel("Start");
stop();
}
}

public void run() {
// Just to be nice, lower this thread’s priority so it can’t
// interfere with other processing going on.
Thread.currentThread().setPriority(Thread.MIN_PRIORITY);

// Remember the starting time.
long startTime = System.currentTimeMillis();

// Remember which thread we are.
Thread currentThread = Thread.currentThread();

// This is the animation loop.
while (currentThread == mAnimationThread) {
// Advance the animation frame.
miFrameNumber++;

// Update the position of the circle.
move();

// Draw the next frame.
repaint();

// Delay depending on how far we are behind.
try {
startTime += miTimeStep;
Thread.sleep(Math.max(0,
startTime-System.currentTimeMillis()));
}
catch (InterruptedException e) {
break;
}
}
}

// Update the position of the circle.
void move() {
mCenter.x += miDX;
if (mCenter.x - miRadius < 0 ||
mCenter.x + miRadius > getSize().width) {
miDX = -miDX;
mCenter.x += 2*miDX;
}

mCenter.y += miDY;
if (mCenter.y - miRadius < 0 ||
mCenter.y + miRadius > getSize().height) {
miDY = -miDY;
mCenter.y += 2*miDY;
}
}
}

Contents

  1. Local and Global Variables
  2. Reference Types
  3. Functions in C++ 
  4. Basic Input and Output
  5. Creating and Destroying Objects - Constructors and Destructors

1. Local and Global Variables

(Ref. Lippman 8.1-8.3)

Local variables are objects that are only accessible within a single function (or a sub-block within a function). Global variables, on the other hand, are objects that are generally accessible to every function in a program. It is possible, though potentially confusing, for a local object and a global object to share the same name. In the following example, the local object x shadows the object x in the global namespace. We must therefore use the global scope operator, ::, to access the global object.

main_file.C

float x;            // A global object.

int main () {
float x;        // A local object with the same name.

x = 5.0;       // This refers to the local object.
::x = 7.0;     // This refers to the global object.
}
 

What happens if we need to access the global object in another file? The object has already been defined in main_file.C, so we should not set aside new memory for it. We can inform the compiler of the existence of the global object using the extern keyword.

another_file.C

extern float x;   // Declares the existence of a global object external to this file.

void do_something() {
x = 3;          // Refers to the global object defined in main_file.C.
}
 

2. Reference Types

(Ref. Lippman 3.6)

Reference types are a convenient alternative way to use the functionality that pointers provide. A reference is just a nickname for existing storage. The following example defines an integer object, i, and then it defines a reference variable, r, by the statement

int& r = i;

Be careful not to confuse this use of & with the address of operator. Also note that, unlike a pointer, a reference must be initialized at the time it is defined.

#include <stdio.h>

int main() {
int i = 0;
int& r = i;    // Create a reference to i.

i++;
printf("r = %d\n", r);
}
 

3. Functions in C++

Argument Passing

(Ref. Lippman 7.3)

Arguments can be passed to functions in two ways. These techniques are known as

Pass by value.

Pass by reference.

When an argument is passed by value, the function gets its own local copy of the object that was passed in. On the other hand, when an argument is passed by reference, the function simply refers to the object in the calling program.

// Pass by value.
void increment (int i) {
i++;                               // Modifies a local variable.
}

// Pass by reference.
void decrement (int& i) {
i--;                               // Modifies storage in the calling function.
}

#include <stdio.h>

int main () {
int k = 0;

increment(k);                   // This has no effect on k.
decrement(k);                   // This will modify k.
printf("%d\n", k);
}
 

Passing a large object by reference can improve efficiency since it avoids the overhead of creating an extra copy. However, it is important to understand the potentially undesirable side effects that can occur. If we want to protect against modifying objects in the calling program, we can pass the argument as a constant reference:

// Pass by reference.
void decrement (const int& i) {
i--;                               // This statement is now illegal.
}
 

Return by Reference

(Ref. Lippman 7.4)

A function may return a reference to an object, as long as the object is not local to the function. We may decide to return an object by reference for efficiency reasons (to avoid creating an extra copy). Returning by reference also allows us to have function calls that appear on the left hand side of an assignment statement. In the following contrived example, select_month() is used to pick out the month member of the object today and set its value to 9.

struct date {
int day;
int month;
int year;
};

int& select_month(struct date &d) {
return d.month;
}

#include <stdio.h>

int main() {
struct date today;

select_month(today) = 9;                       // This is equivalent to: today.month = 9;
printf("%d\n", today.month);
}
 

Default Arguments

(Ref. Lippman 7.3.5)

C++ allows us to specify default values for function arguments. Arguments with default values must all appear at the end of the argument list. In the following example, the third argument of move() has a default value of zero.

void move(int dx, int dy, int dz = 0) {
// Move some object in 3D space.  If dz = 0, then move the object in 2D space.
}

int main() {
move(2, 3, 5);
move(2, 3);       // dz assumes the default value, 0.
}
 

Function Overloading

(Ref. Lippman 9.1)

In C++, two functions can share the same name as long as their signatures are different. The signature of a function is another name for its parameter list. Function overloading is useful when two or more functionally similar tasks need to be implemented in different ways. For example:

void draw(double center, double radius) {
// Draw a circle.
}

void draw(int left, int top, int right, int bottom) {
// Draw a rectangle.
}

int main() {
draw(0, 5);                  // This will draw a circle.
draw(0, 4, 6, 8);           // This will draw a rectangle.
}
 

Inline Functions

(Ref. Lippman 3.15, 7.6)

Every function call involves some overhead. If a small function has to be called a large number of times, the relative overhead can be high. In such instances, it makes sense to ask the compiler to expand the function inline. In the following example, we have used the inline keyword to make swap() an inline function.
 

inline void swap(int& a, int& b) {
int tmp = a;
a = b;
b = tmp;
}

#include <stdio.h>

int main() {
int i = 2, j = 3;

swap(i, j);
printf("i = %d j = %d\n", i, j);
}
 

This code will be expanded as

int main() {
int i = 2, j = 3;

int tmp = i;
i = j;
j = tmp;
printf("i = %d j = %d\n", i, j);
}

Whenever the compiler needs to expand a call to an inline function, it needs to know the function definition. For this reason, inline functions are usually placed in a header file that can be included where necessary. Note that the inline specification is only a recommendation to the compiler, which the compiler may choose to ignore. For example, a recursive function cannot be completely expanded inline.
 


4. Basic Input and Output

(Ref. Lippman 1.5)

C++ provides three predefined objects for basic input and output operations: cin, cout and cerr. All three objects can be accessed by including the header file iostream.h.
 

Reading from Standard Input: cin

cin is an object of type istream that allows us to read in a stream of data from standard input. It is functionally equivalent to the scanf() function in C. The following example shows how cin is used in conjunction with the >> operator. Note that the >> points towards the object into which we are reading data.
 

#include <iostream.h>                // Provides access to cin and cout.
#include <stdio.h>                     /* Provides access to printf and scanf. */

int main() {
int i;

cin >> i;                                  // Uses the stream input object, cin, to read data into i.
scanf("%d", &i);                      /* Equivalent C-style statement. */

float a;
cin >> i >> a;                          // Reads multiple values from standard input.
scanf("%d%f", &i, &a);           /* Equivalent C-style statement. */
}
 

Writing to Standard Output: cout

cout is an object of type ostream that allows us to write out a stream of data to standard output. It is functionally equivalent to the printf() function in C. The following example shows how cout is used in conjunction with the << operator. Note that the << points away from the object from which we are writing out data.
 

#include <iostream.h>                // Provides access to cin and cout.
#include <stdio.h>                     /* Provides access to printf and scanf. */

int main() {
cout << "Hello World!\n";       // Uses the stream output object, cout, to print out a string.
printf("Hello World!\n");          /* Equivalent C-style statement. */

int i = 7;
cout << "i = " << i << endl;     // Sends multiple objects to standard output.
printf("i = %d\n", i);                 /* Equivalent C-style statement. */
}
 

Writing to Standard Error: cerr

cerr is also an object of type ostream. It is provided for the purpose of writing out warning and error messages to standard error. The usage of cerr is identical to that of cout. Why then should we bother with cerr? The reason is that it makes it easier to filter out warning and error messages from real data. For example, suppose that we compile the following program into an executable named foo:

#include <iostream.h>

int main() {
int i = 7;
cout << i << endl;                                   // This is real data.
cerr << "A warning message" << endl;    // This is a warning.
}

We could separate the data from the warning by redirecting the standard output to a file, while allowing the standard error to be printed on our console.

athena% foo > temp
A warning message

athena% cat temp
7
 

5. Creating and Destroying Objects - Constructors  and Destructors

(Ref. Lippman 14.1-14.3)

Let’s take a closer look at how constructors and destructors work.

A Point Class

Here is a complete example of a Point class. We have organized the code into three separate files:

point.h contains the declaration of the class, which describes the structure of a Point object.

point.C contains the definition of the class, i.e. the actual implementation of the methods.

point_test.C is a program that uses the Point class.

Our Point class has three constructors and one destructor.

Point();                               // The default constructor.
Point(float fX, float fY);       // A constructor that takes two floats.
Point(const Point& p);         // The copy constructor.
~Point();                             // The destructor.

These constructors can be respectively invoked by object definitions such as

Point a;
Point b(1.0, 2.0);
Point c(b);

The default constructor, Point(), is so named because it can be invoked without any arguments. In our example, the default constructor initializes the Point to (0,0). The second constructor creates a Point from a pair of coordinates of type float. Note that we could combine these two constructors into a single constructor which has default arguments:

Point(float fX=0.0, float fY=0.0);

The third constructor is known as a copy constructor since it creates one Point from another. The object that we want to clone is passed in as a constant reference. Note that we cannot pass the argument by value in this instance, because passing by value would itself invoke the copy constructor, leading to infinite recursion. In this example, the destructor does not have to perform any clean-up operations. Later on, we will see examples where the destructor has to release dynamically allocated memory.

Constructors and destructors can be triggered more often than you may imagine. For example, each time a Point is passed to a function by value, a local copy of the object is created. Likewise, each time a Point is returned by value, a temporary copy of the object is created in the calling program. In both cases, we will see an extra call to the copy constructor, and an extra call to the destructor. You are encouraged to put print statements in every constructor and in the destructor, and then carefully observe what happens.
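As a minimal sketch of that experiment (the class name Tracer and the function take_by_value are made up for illustration; the static counters just make the calls observable), passing an object by value triggers one copy-constructor call on entry and one destructor call for the copy on return:

```cpp
#include <cstdio>

// Hypothetical class whose constructors and destructor announce themselves.
class Tracer {
public:
    static int copies;        // counts copy-constructor calls
    static int destructions;  // counts destructor calls
    Tracer() { printf("default constructor\n"); }
    Tracer(const Tracer&) { copies++; printf("copy constructor\n"); }
    ~Tracer() { destructions++; printf("destructor\n"); }
};
int Tracer::copies = 0;
int Tracer::destructions = 0;

// Because t is passed by value, a copy is constructed on entry
// and destroyed when the function returns.
void take_by_value(Tracer t) {
}

int main() {
    Tracer a;             // default constructor
    take_by_value(a);     // copy constructor, then destructor for the copy
    return 0;
}                         // destructor for a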
 

point.h

// Declaration of class Point.

#ifndef _POINT_H_
#define _POINT_H_

#include <iostream.h>

class Point {
// The state of a Point object. Property variables are typically
// set up as private data members, which are read from and
// written to via public access methods.
private:
float mfX;
float mfY;

// The behavior of a Point object.
public:
Point();                               // The default constructor.
Point(float fX, float fY);       // A constructor that takes two floats.
Point(const Point& p);         // The copy constructor.
~Point();                             // The destructor.
void print() {                       // This function will be made inline by default.
cout << "(" << mfX << "," << mfY << ")" << endl;
}
void set_x(float fX);
float get_x();
void set_y(float fY);
float get_y();
};

#endif // _POINT_H_

point.C

// Definition of class Point.

#include "point.h"

// A constructor which creates a Point object at (0,0).
Point::Point() {
cout << "In constructor Point::Point()" << endl;
mfX = 0.0;
mfY = 0.0;
}

// A constructor which creates a Point object from two
// floats.
Point::Point(float fX, float fY) {
cout << "In constructor Point::Point(float fX, float fY)" << endl;
mfX = fX;
mfY = fY;
}

// A constructor which creates a Point object from
// another Point object.
Point::Point(const Point& p) {
cout << "In constructor Point::Point(const Point& p)" << endl;
mfX = p.mfX;
mfY = p.mfY;
}

// The destructor.
Point::~Point() {
cout << "In destructor Point::~Point()" << endl;
}

// Modifier for x coordinate.
void Point::set_x(float fX) {
mfX = fX;
}

// Accessor for x coordinate.
float Point::get_x() {
return mfX;
}

// Modifier for y coordinate.
void Point::set_y(float fY) {
mfY = fY;
}

// Accessor for y coordinate.
float Point::get_y() {
return mfY;
}

point_test.C

// Test program for the Point class.

#include "point.h"

int main() {
Point a;
Point b(1.0, 2.0);
Point c(b);

// Print out the current state of all objects.
a.print();
b.print();
c.print();

b.set_x(3.0);
b.set_y(4.0);

// Print out the current state of b.
cout << endl;
b.print();
}

Operator Overloading

(Ref. Lippman 15.1-15.7, 15.9)

We have already seen how functions can be overloaded in C++. We can also overload operators, such as the + operator, for classes that we write. Note that operators for built-in types may not be created or modified. A complete list of overloadable operators can be found in Lippman, Table 15.1.

A Complex Number Class

In this example, we have overloaded the following operators: +, *, >, =, (), [], <<, and the cast operator.

operator+() and operator*()

Let a and b be of type Complex. The + operator in the expression a+b can be interpreted as

a.operator+(b)

where operator+() is a member function of class Complex. We then have to write this function so that it adds the real and imaginary parts of a and b and returns the result as a new Complex object. Note that the function returns by value, since it has to create a temporary object for the result of the addition.

Our implementation will also work for an expression like a+b+c. In this case, the operator will be invoked in the following order

    (a.operator+(b)).operator+(c)

The member operator+() will also work for an expression like

    a + 7.0

because we have a constructor that can create a Complex from a single double. However, the expression

    7.0 + a

will not work, because operator+() is a member of class Complex, and not the built-in double type.

To solve this problem, we can make the operator a global function, as we have done with operator*(). The expression 7.0 * a will be interpreted as

    operator*(7.0, a)

Since a global function does not automatically have access to the private data of class Complex, we can grant special access to operator*() by making it a friend function of class Complex. In general, friend functions and classes should be used sparingly, since they are a violation of the rule of encapsulation.

operator>()

We have overloaded this operator to allow comparison of two Complex numbers, based on their magnitudes.

operator=()

The = operator is designed to work with statements like

    a = b;
    a = b = c;

These statements are interpreted, respectively, as

    a.operator=(b);
    a.operator=(b.operator=(c));

In this case, the operator changes the object that invokes it, and it returns a reference to this object so that the second statement will work. The keyword this gives us a pointer to the current object within the class definition.

operator Point()

This operator allows us to convert a Complex object to a Point object. It can be used in an explicit cast, like

    Complex a;
    Point p;
    p = (Point)a;

but it could also be invoked implicitly, as in

    p = a;

Hence, a great deal of caution should be used when providing user-defined type conversions in this way. An alternative way to convert from a Complex to a Point is to give the Point class a constructor that takes a Complex as an argument. However, we might not have access to the source code for the Point class, if it were written by someone else.

Note that overloaded cast operators do not have a return type.

operator()()

This is the function call operator. It is invoked by a Complex object a as

    a()

Here we have overloaded operator()() to return true if the object has an imaginary part and false otherwise.

operator[]()

The overloaded subscript operator is useful if we wish to access fields of the object like array elements. In our example, we have made a[0] refer to the real part and a[1] refer to the imaginary part of the object.

operator<<()

The output operator cannot be overloaded as a member function because we do not have access to the ostream class to which the predefined object cout belongs. Instead, operator<<() is overloaded as a global function. We must return a reference to the ostream object so that calls to the output operator can be chained. For example,

    cout << a << b;

will be invoked as

    operator<<(operator<<(cout, a), b)
 

complex.h

// Interface for class Complex.
#ifndef _COMPLEX_H_
#define _COMPLEX_H_

#include <iostream.h>
#include "point.h"

#ifndef DEBUG_PRINT
#ifdef _DEBUG
#define DEBUG_PRINT(str)  cout << str << endl;
#else
#define DEBUG_PRINT(str)
#endif
#endif

class Complex {
private:
double mdReal;
double mdImag;

public:
// This combines three constructors in one. Here we have used an initialization list to initialize
// the private data, instead of using assignment statements in the body of the constructor.
Complex(double dReal=0.0, double dImag=0.0) : mdReal(dReal), mdImag(dImag) {
DEBUG_PRINT("In Complex::Complex(double dReal, double dImag)")
}

// The copy constructor.
Complex(const Complex& c);
~Complex() {
DEBUG_PRINT("In Complex::~Complex()")
}
void print();

// Overloaded member operators.
Complex operator+(const Complex& c) const;      // Overloaded + operator.
int operator>(const Complex& c) const;                // Overloaded > operator.
Complex& operator=(const Complex& c);           // Overloaded = operator.
operator Point() const;                  // Overloaded cast-to-Point operator.
bool operator()(void) const;          // Overloaded function call operator.
double& operator[](int i);              // Overloaded subscript operator.

// Overloaded global operators. We make these operators friends of class
// Complex, so that they will have direct access to the private data.
friend ostream& operator<<(ostream& os, const Complex& c);       // Overloaded output operator.
friend Complex operator*(const Complex& c, const Complex& d);   // Overloaded * operator.
};
#endif    // _COMPLEX_H_
 

complex.C

// Implementation for class Complex.
#include "complex.h"
#include <stdlib.h>

// Definition of copy constructor.
Complex::Complex(const Complex& c) {
DEBUG_PRINT("In Complex::Complex(const Complex& c)")
mdReal = c.mdReal;
mdImag = c.mdImag;
}

// Definition of overloaded + operator.
Complex Complex::operator+(const Complex& c) const {
DEBUG_PRINT("In Complex Complex::operator+(const Complex& c) const")
return Complex(mdReal + c.mdReal, mdImag + c.mdImag);
}
 

// Definition of overloaded > operator.
int Complex::operator>(const Complex& c) const {
double sqr1 = mdReal * mdReal + mdImag * mdImag;
double sqr2 = c.mdReal * c.mdReal + c.mdImag * c.mdImag;

DEBUG_PRINT("In int Complex::operator>(const Complex& c) const")
return (sqr1 > sqr2);
}
 

// Definition of overloaded assignment operator.
Complex& Complex::operator=(const Complex& c) {
DEBUG_PRINT("In Complex& Complex::operator=(const Complex& c)")
mdReal = c.mdReal;
mdImag = c.mdImag;
return *this;
}
 

// Definition of overloaded cast-to-Point operator. This converts a Complex object to a Point object.
Complex::operator Point() const {
float fX, fY;

DEBUG_PRINT("In Complex::operator Point() const")
// Our Point class uses floats instead of doubles. In this case, we make a conscious decision
// to accept a loss in precision by converting the doubles to floats. Be careful when doing this!
fX = (float)mdReal;
fY = (float)mdImag;

return Point(fX, fY);
}

// Definition of overloaded function call operator. We have defined this operator to allow us to test
// whether a number is complex or real.
bool Complex::operator()(void) const {
DEBUG_PRINT("In bool Complex::operator()(void) const")
if (mdImag == 0.0)
return false;     // Number is real.
else
return true;      // Number is complex.
}

// Definition of overloaded subscript operator. We have defined this operator to allow access to
// the real and imaginary parts of the object.
double& Complex::operator[](int i) {
DEBUG_PRINT("In double& Complex::operator[](int)")
switch(i) {
case 0:
return mdReal;

case 1:
return mdImag;

default:
cerr << "Index out of bounds" << endl;
exit(1);                           // A function in the C standard library.
}
}

// Definition of a print function.
void Complex::print() {
cout << mdReal << " + j" << mdImag << endl;
}

// Definition of overloaded output operator. Note that this is a global function. We can
// access the private data of the Complex object c because the operator is a friend function.
ostream& operator<<(ostream& os, const Complex& c) {
DEBUG_PRINT("In ostream& operator<<(ostream&, const Complex&)")
os << c.mdReal << " + j" << c.mdImag;
return os;
}

// Definition of overloaded * operator. By making this operator a global function, we can
// handle statements such as a = 7 * b, where a and b are Complex objects.
Complex operator*(const Complex& c, const Complex& d) {
DEBUG_PRINT("In Complex operator*(const Complex& c, const Complex& d)")
double dReal = c.mdReal*d.mdReal - c.mdImag*d.mdImag;
double dImag = c.mdReal*d.mdImag + c.mdImag*d.mdReal;
return Complex(dReal, dImag);
}
 

complex_test.C

#include "complex.h"

int main() {
Complex a;
Complex b;
Complex *c;
Complex d;

// Use of constructors and the overloaded operator=().
a = (Complex)2.0;                 // Same as a = Complex(2.0);
b = Complex(3.0, 4.0);
c = new Complex(5.0, 6.0);

// Use of the overloaded operator+().
d = a + b;                              // Same as d = a.operator+(b);
d.print();

d = a + b + *c;                      // Same as d = (a.operator+(b)).operator+(*c);
d.print();

// Use of the overloaded operator>().
if (b > a)
cout << "b > a" << endl;
else
cout << "b <= a" << endl;

// Use of cast-to-Point operator. This will convert a Complex object to a Point object.
// An alternative way to handle the type conversion is to give the Point class a constructor
// that takes a Complex object as an argument.
Point p;
p = (Point)b;
p.print();

// Use of the overloaded operator()().
if (a() == true)
cout << "a is a complex number" << endl;
else
cout << "a is a real number" << endl;

// Use of the overloaded operator[](). This will change the imaginary part of a.
a[1] = 8.0;
a.print();

// Use of the overloaded global operator<<().
cout << “a = " << a << endl;

// Use of the overloaded global operator*(). The double literal constant will be passed as the
// first argument to operator*() and it will be converted to a Complex object using the Complex
// constructor.  This statement would not be legal if the operator were a member function.
d = 7.0 * b;

cout << “d = " << d << endl;
}

Contents

  1. Object-Oriented Design vs Procedural Design
  2. The HelloWorld Procedure and the HelloWorld Object
  3. C++ Data Types
  4. Expressions
  5. Coding Style

1. Object-Oriented Design vs Procedural Design

Many of you will already be familiar with one or more procedural languages. Examples of such languages are FORTRAN 77, Pascal and C. In the procedural programming paradigm, one focuses on the decomposition of software into various functional components. In other words, the program is organized into a collection of functions (also known as procedures or subroutines), which are executed in a defined order to produce the desired result.

By contrast, object-based programming focuses on the organization of software into a collection of components, called objects, that group together

  1. Related items of data, known as properties.
  2. Operations that are to be performed on the data, which are known as methods.

In other words, an object is a model of a real world concept, which possesses both state and behavior.
Programming languages that allow us to create objects are said to support abstract data types. Examples of such languages are CLU, Ada and Modula-2.

The object-oriented programming paradigm goes a step beyond abstract data types by adding two new features: inheritance and polymorphism. We will talk about these ideas in depth later, but for now it will be sufficient to say that their purpose is to facilitate the management of objects that have similar characteristics. For example, squares, triangles and circles are all instances of shapes. Their common properties are that they all have a centroid and an area. A common method might be one that displays the shape on the screen of a computer. Examples of languages that support the object-oriented paradigm are C++, Java® and Smalltalk.

2. The HelloWorld Procedure and the HelloWorld Object

Let’s take a look at two simple programs that print out the string, Hello World!

The HelloWorld Procedure

Here is the procedural version of the program, written in C. The first statement is a preprocessor directive that tells the compiler to include the contents of the header file stdio.h. We include this file because it declares the existence of the built-in function, printf(). Every C program must have a top-level function named main(), which provides the entry point to the program. 

#include <stdio.h>

/* The HelloWorld procedure definition. */
void HelloWorld() {
    printf("Hello World!\n");
}

/* The main program. */
int main() {
    HelloWorld();               /* Execute the HelloWorld procedure. */
    return 0;                      /* Indicates successful completion of the program. */
}

The HelloWorld Object

Here is the object-based version of the program, written in C++. We have created a new data type, HelloWorld, that is capable of printing out the words we want. In C++, the keyword class is used to declare a new data type. Our class has three publicly accessible methods, HelloWorld(), ~HelloWorld() and print(). The first two methods have special significance and they are respectively known as the constructor and the destructor. The constructor has the same name as the class. It is an initialization method that will be automatically invoked whenever we create a HelloWorld object. The destructor also has the same name as the class, but with a ~ prefix. It is a finalization method that will be automatically invoked whenever a HelloWorld object is destroyed. In our class, the print() method is the only one that actually does anything useful.

It is important to understand the distinction between a class and an object. A class is merely a template for creating one or more objects. Our main program creates a single object named a based on the class definition that we have provided. We then send the object a “print” message by selecting and invoking the print() method using the . operator. We are able to access the print() method in main() because we have made it a public member function of the class.
 

#include <stdio.h>

// The HelloWorld class definition.
class HelloWorld {
public:
    HelloWorld() {}           // Constructor.
    ~HelloWorld() {}          // Destructor.
    void print() {
        printf("Hello World!\n");
    }
};                            // Note that a semicolon is required here.

// The main program.
int main() {
    HelloWorld a;           // Create a HelloWorld object.
    a.print();                  // Send a "print" message to the object.
    return 0;
}

C++ as a Superset of C

Although C++ is generally thought of as an object-oriented language, it supports the procedural programming paradigm as well. In fact, C++ supports all the features of C in addition to providing new features of its own. For example, a C++ program may include C-style comments that use the /* */ delimiters as well as C++-style comments that use the // syntax.

/* C-style comments are also allowed in C++. */

// Alternative comment syntax that is only allowed in C++.

3. C++ Data Types

Built-in Data Types

(Ref. Lippman 3.1, 3.2)

C++ built-in data types are similar to those found in C. The basic built-in types include

  • A boolean type: bool (only available in Standard C++).
  • Character types: char, unsigned char and wchar_t (wchar_t supports wide characters and is only available in Standard C++).
  • Integer types: short (or short int), int, long (or long int), and unsigned variants such as unsigned short, unsigned int and unsigned long.
  • Floating point types: float, double and long double.

Not all computing platforms agree on the actual size of the built-in data types, but the following table indicates the typical sizes on a 32-bit platform:
 

BUILT-IN DATA TYPE                        SIZE IN BYTES
char, unsigned char                       1
short, unsigned short                     2
wchar_t, bool, int, unsigned int, float   4
double                                    8
long double                               8 or 16

A literal constant is a constant value of some type. Examples of literal constants are
 

DATA TYPE           LITERAL CONSTANT
char                'a', '7'
wchar_t             L'a', L'7'
bool                true, false
long int            8L, 8l
unsigned long int   8UL, 8ul
float               2.718F, 2.718f
double              2.718, 1e-3
long double         2.718L, 2.718l

Note that there is no built-in data type for strings. Strings can be represented as character arrays or by using the string type provided in the Standard C++ library. A string literal constant is of the form

"Hello World!"

User-defined Data Types

We have already seen how we can define new data types by writing a class, and for the most part we will use classes. However, C++ also provides extended support for C-style structures. For example, C++ allows member functions to be packaged in a struct in addition to member data. The most significant difference between a class and a struct is that by default, the members of a class are private, whereas the members of a struct are public.

struct date {
    int day;
    int month;
    int year;
    void set_date(int d, int m, int y);    // Member functions only allowed in C++.
};

int main() {
    struct date a;   /* C-style definition. */
    date b;          // Allowable C++ definition (the struct keyword may be omitted).
}
 

Pointer Types

(Ref. Lippman 3.3)

Pointer variables (or pointers) are a powerful concept: they allow us to manipulate objects by their memory address rather than by their name. It is important to understand pointers clearly, since they are used extensively in this course and in real world software.

A pointer must convey two pieces of information, both of which are necessary to access the object that it points to:

  1. the memory address of the object
  2. the type of the object

The following example illustrates the use of a pointer to an object of type double. The pointer is defined by the statement

double *p;

p can now hold the memory address of a double object, such as d. We obtain the address of d by applying the address of operator, &d, and we then store it in p. Now that p contains a valid address, we can refer to the object d by applying the dereference operator, *p. Notice that we have used * in two different contexts, with different meanings in each case. The meaning of & also depends on the context in which it is used.

#include <stdio.h>

int main() {
    double d;        // A double object.
    double *p;       // A variable that is a pointer to a double.

    p = &d;          // Take the memory address of d and store it in p.

    d = 7.0;         // Store a double precision number in d.
    printf("The value of the object d is %lf\n", d);
    printf("The value of the object that p points to is %lf\n", *p);
    printf("The address of the object that p points to is %u\n", p);
}
 
Here is the output from a trial run:

The value of the object d is 7.000000
The value of the object that p points to is 7.000000
The address of the object that p points to is 4026528296

Reference Types

(Ref. Lippman 3.6)

As an added convenience, C++ provides reference types, which are an alternative way to use the functionality that pointers provide. A reference is just a nickname for existing storage.

The following example defines an integer object, i, and then it defines a reference variable, r, by the statement

int& r = i;

Be careful not to confuse this use of & with the address of operator. Also note that, unlike a pointer, a reference must be initialized at the time it is defined.

#include <stdio.h>

int main() {
    int i = 0;
    int& r = i;    // Create a reference to i.

    i++;
    printf("r = %d\n", r);
}

Explicit Type Conversion

(Ref. Lippman 4.14)

Explicit type conversion can be performed using a cast operator. The following code shows three alternative ways to explicitly convert an int to a float.

int main() {
    int a;
    float b;

    a = 3;
    b = (float)a;                           /* C-style cast operator. */
    b = float(a);                          // Alternative type conversion notation allowed in C++.
    b = static_cast<float>(a);    // A second alternative, allowed only in Standard C++.
}

const Keyword

(Ref. Lippman 3.5)

The const keyword is used to designate storage whose contents cannot be changed. A const object must be initialized at the time it is defined.

const int i = 10;    /* Allowed both in C and C++. */
const int j;            /* This is illegal in C++ because j is not initialized. */

Variable Definitions

In C++, variable definitions may occur practically anywhere within a code block. A code block refers to any chunk of code that lies within a pair of scope delimiters, {}. For example, the following C program requires i and j to be defined at the top of the main() function.

#include <stdio.h>

int main() {
    int i, j;    /* C requires variable definitions to be at the top of a code block. */

    for (i = 0; i < 5; i++) {
        printf("Done with C\n");
    }
    j = 10;
}

In the C++ version of the program, we can define the variables i and j when they are first used.

#include <stdio.h>

int main() {
    for (int i = 0; i < 5; i++) {        // In Standard C++, i is available anywhere within the for loop.
        printf("Still learning C++\n");
    }
    int j = 10;
}

4. Expressions

(Ref. Lippman 4.1-4.5, 4.7, 4.8, 4.13, 4.14)

Operator Precedence

An expression consists of one or more operands and a set of operations to be applied to them. The order in which operators are applied to operands is determined by operator precedence. For example, the expression

1 + 4 * 3 / 2 == 7 && !0

is evaluated as

((1 + ((4 * 3) / 2)) == 7) && (!0)

Note that the right hand side of the logical AND operator is only evaluated if the left hand side evaluates to true. (For a table of operator precedence, see Lippman, Table 4.4.)

Arithmetic Conversions

The evaluation of arithmetic expressions follows two general guidelines:

  1. Wherever necessary, types are promoted to a wider type in order to prevent the loss of precision.
  2. Integral types (these are the various boolean, character and integer types) are promoted to the int data type prior to evaluation of the expression.

5. Coding Style

Coding styles tend to vary from one individual to another. While you are free to develop your own style, it is important to make your code consistent and readable. Software organizations frequently try to enforce consistency by developing a set of coding guidelines for programmers.

Here is an example of an inconsistent coding style. The curly braces in the two for loops are aligned differently. The second style is usually preferred because it is more compact and it avoids excessive indentation.
 

#include <stdio.h>

int main() {
    int i;

    for (i = 0; i < 5; i++)
    {
        printf("This convention aligns the curly braces.\n");
    }

    for (i = 0; i < 5; i++) {
        printf("This is a more compact convention which aligns ");
        printf("the closing brace with the for statement.\n");
    }
}

Physical Simulation Example

The following example shows how you might integrate a numerical simulation with a Java® animation. Here, we have considered a simple spring-mass-damper system of the form

    d^2x/dt^2 + 2 xi w0 dx/dt + w0^2 x = 0

and computed the solution numerically using an explicit Forward Euler time-stepping scheme. Try increasing the size of the time step and notice that the simulation becomes unstable when the growth factors become larger than 1. When using explicit time integration, we must therefore choose our time step to be smaller than the critical time step.

Now try using an implicit Backward Euler time-stepping scheme. In this case, the simulation remains stable even for large time steps because the growth factors are always smaller than 1. You may also wish to determine the exact solution to the differential equation and compare it to the numerical solution.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

class Vec2D {
private float[] vec;

public Vec2D(float fX, float fV) {
vec = new float[2];
vec[0] = fX;
vec[1] = fV;
}

public void translate(float fDx) {
vec[0] += fDx;
}

public void setPos(float fX) {
vec[0] = fX;
}

public void setVel(float fV) {
vec[1] = fV;
}

public float getPos() {
return vec[0];
}

public float getVel() {
return vec[1];
}
}

class Matrix2D {
private float[][] mat;

public Matrix2D(float a11, float a12, float a21, float a22) {
mat = new float[2][2];
mat[0][0] = a11;
mat[0][1] = a12;
mat[1][0] = a21;
mat[1][1] = a22;
}

public void multiply(Vec2D vec) {
float fX = mat[0][0] * vec.getPos() + mat[0][1] * vec.getVel();
float fV = mat[1][0] * vec.getPos() + mat[1][1] * vec.getVel();
vec.setPos(fX);
vec.setVel(fV);
}

public void invert() {
float det = mat[0][0]*mat[1][1]-mat[0][1]*mat[1][0];
float tmp = mat[0][0];
mat[0][0] = mat[1][1] / det;
mat[1][1] = tmp / det;
mat[0][1] = -mat[0][1] / det;
mat[1][0] = -mat[1][0] / det;
}
}

class Ball {
Vec2D mXState;  // x position and velocity.
Vec2D mYState;  // y position and velocity.
Matrix2D mMatrix;
int miRadius;
int miWindowWidth, miWindowHeight;

public Ball(int iRadius, int iW, int iH) {
miRadius = iRadius;
mXState = new Vec2D(0.0f, 0.0f);
mYState = new Vec2D(0.0f, 0.0f);
miWindowWidth = iW;
miWindowHeight = iH;
}

public void setPosition(float fXPos, float fYPos) {
mXState.setPos(fXPos);
mYState.setPos(fYPos);
}

public void setVelocity(float fXVel, float fYVel) {
mXState.setVel(fXVel);
mYState.setVel(fYVel);
}

public void setParams(float fXi, float fW0, float fDt, boolean explicit) {
float fReal1 = 0.0f, fImag1 = 0.0f;  // First eigenvalue.
float fReal2 = 0.0f, fImag2 = 0.0f;  // Second eigenvalue.
float G1, G2;  // Growth factors.

// Determine the eigenvalues.
if (fXi < 1.0f) {
fReal1 = fReal2 = -fW0*fXi;
fImag1 = (float)(fW0*Math.sqrt(1-fXi*fXi));
fImag2 = -fImag1;
System.out.println("System is underdamped.");
System.out.println("Eigenvalues are: " + fReal1 + " +/- " + fImag1 + "i");
}
else {
fReal1 = -fW0*(fXi + (float)Math.sqrt(fXi*fXi-1));
fReal2 = -fW0*(fXi - (float)Math.sqrt(fXi*fXi-1));
System.out.println("System is overdamped or critically damped.");
System.out.println("Eigenvalues are: " + fReal1 + " and " + fReal2);
}

if (explicit) {
// Forward Euler.
mMatrix = new Matrix2D(1.0f, fDt, -fW0*fW0*fDt, 1-2*fXi*fW0*fDt);

G1 = (float)Math.sqrt(Math.pow(1+fReal1*fDt,2.0) + Math.pow(fImag1*fDt,2.0));
G2 = (float)Math.sqrt(Math.pow(1+fReal2*fDt,2.0) + Math.pow(fImag2*fDt,2.0));
}
else {
// Backward Euler.
mMatrix = new Matrix2D(1.0f, -fDt, fW0*fW0*fDt, 1+2*fXi*fW0*fDt);
mMatrix.invert();

G1 = (float)(1.0/Math.sqrt(Math.pow(1-fReal1*fDt,2.0) + Math.pow(fImag1*fDt,2.0)));
G2 = (float)(1.0/Math.sqrt(Math.pow(1-fReal2*fDt,2.0) + Math.pow(fImag2*fDt,2.0)));
}
System.out.println("Growth factors are " + G1 + " and " + G2);
}

public int getXPos() {
return (int)mXState.getPos();
}

public int getYPos() {
return (int)mYState.getPos();
}

public void draw(Graphics g) {
g.setColor(Color.red);
g.fillOval(miWindowWidth/2+(int)mXState.getPos()-miRadius,
miWindowHeight/2+(int)mYState.getPos()-miRadius,
2*miRadius, 2*miRadius);
}

// Update the position of the ball.
void move() {
mMatrix.multiply(mYState);
}
}

public class Animation extends JApplet implements Runnable, ActionListener {
int miFrameNumber = 0;
int miTimeStep;
Thread mAnimationThread;
boolean mbIsPaused = false;
Button mButton;
Ball ball;

public void init() {
// Time step in milliseconds.
miTimeStep = 20; // Try changing this to (a) 50 ms and (b) 60 ms.

// Initialize the parameters of the ball.  The parameters refer to the
// differential equation:  d^2 x/dt^2 + 2 xi w0 dx/dt + w0^2 x = 0

int iRadius = 15;
float fXPos = 0.0f;      // Initial x displacement
float fYPos = 100.0f;    // Initial y displacement
float fXVel = 0.0f;      // Initial x velocity
float fYVel = 0.0f;      // Initial y velocity
float fXi = 0.05f;       // xi
float fW0 = 2.0f;        // w0
boolean explicit = true; // true: forward Euler, false: backward Euler

ball = new Ball(iRadius, getSize().width, getSize().height);
ball.setPosition(fXPos, fYPos);
ball.setVelocity(fXVel, fYVel);
ball.setParams(fXi, fW0, miTimeStep/1000.0f, explicit);

// Create a button to start and stop the animation.
mButton = new Button("Stop");
getContentPane().add(mButton, "North");
mButton.addActionListener(this);

// Create a JPanel subclass and add it to the JApplet.  All drawing
// will be done here, so we must write the paintComponent() method.
// Note that the anonymous class has access to the private data of
// class Animation, because it is defined locally.
getContentPane().add(new JPanel() {
public void paintComponent(Graphics g) {
// Paint the background.
super.paintComponent(g);

// Display the frame number.
g.drawString("Frame " + miFrameNumber, getSize().width/2 - 40,
getSize().height - 15);

// Draw the rubber band.
g.drawLine(getSize().width/2, 0,
getSize().width/2+ball.getXPos(),
getSize().height/2+ball.getYPos());

// Draw the ball.
ball.draw(g);
}
}, "Center");
}

public void start() {
if (mbIsPaused) {
// Don’t do anything.  The animation has been paused.
} else {
// Start animating.
if (mAnimationThread == null) {
mAnimationThread = new Thread(this);
}
mAnimationThread.start();
}
}

public void stop() {
// Stop the animating thread by setting the mAnimationThread variable
// to null.  This will cause the thread to break out of the while loop,
// so that the run() method terminates naturally.
mAnimationThread = null;
}

public void actionPerformed(ActionEvent e) {
if (mbIsPaused) {
mbIsPaused = false;
mButton.setLabel("Stop");
start();
} else {
mbIsPaused = true;
mButton.setLabel("Start");
stop();
}
}

public void run() {
// Just to be nice, lower this thread’s priority so it can’t
// interfere with other processing going on.
Thread.currentThread().setPriority(Thread.MIN_PRIORITY);

// Remember the starting time.
long startTime = System.currentTimeMillis();

// Remember which thread we are.
Thread currentThread = Thread.currentThread();

// This is the animation loop.
while (currentThread == mAnimationThread) {
// Draw the next frame.
repaint();

// Advance the animation frame.
miFrameNumber++;

// Update the position of the ball.
ball.move();

// Delay depending on how far we are behind.
try {
startTime += miTimeStep;
Thread.sleep(Math.max(0,
startTime-System.currentTimeMillis()));
}
catch (InterruptedException e) {
break;
}
}
}
}

Topics

  1. Introduction
  2. Online Java® Resources
  3. Applications and Applets
  4. Java® Basics

1. Introduction

Java® is an object-oriented programming language that resembles C++ in many respects. One of the major differences is that Java® programs are intended to be architecture-neutral i.e. a Java® program should, in theory, be able to run on a Unix® workstation, a PC or a Macintosh® without recompilation. In C++, we compiled our programs into machine-dependent object code that was linked to produce an executable. By contrast, Java® programs are compiled into machine-independent byte code. The compiled program is then run within a Java® interpreter, which is responsible for executing the byte code instructions. The Java® interpreter is typically referred to as the Java® Virtual Machine, and it must be present on each computer that runs the program.
 

                     javac                        java / appletviewer / netscape
                (Java® compiler)                       (Java® interpreter)
myprog.java  ----------------->  myprog.class  ----------------->  Program output

Follow this link to see Sun Microsystems’ overview: About the Java® Technology

It takes time to learn everything about Java® and it is important to set your expectations accordingly. There are two main challenges:

  • Learning the basic syntax of the language.
  • Gaining familiarity with the libraries of reusable software components that are available to Java® programmers, especially the commonly used parts of the Java® Core API (Application Programming Interface).

In the lectures that follow, we will attempt to familiarize you with the basic syntax, and point out the syntactic and semantic differences between Java® and C++. We will also introduce you to some of the more important class libraries. The Java® API is well documented and you should quickly learn how to navigate the online documentation to find the classes that you need.

The Java® language is still evolving. We will be using the Java® 2 platform, which is also known as the Java® Development Kit (JDK). The latest release is Java® 2 version 1.3. Be prepared to encounter bugs in the implementation of the language from time to time. This includes inconsistencies across hardware platforms. Also note that the latest version of Java® is not supported by the Netscape® browser.

2. Online Java® Resources

Here is an online Java® tutorial.

3. Applications and Applets

An application is a stand-alone program that executes independently of a browser. It is usually launched from the command line, using the command line interpreter, java.

An applet is a program that can be embedded in an HTML page. The program can be run by loading the page into a Java®-enabled browser. The JDK includes a tool, called appletviewer, that can also be used to view applets.

A Java® program can be designed to function

  • as an application
  • as an applet
  • both as an application and as an applet

The Hello World Application

Here is an example of a simple Java® application. We make our program an application by writing a class, HelloWorldApp, that contains a function named main(). We can compile our program by typing

javac HelloWorld.java

This will produce a file named HelloWorldApp.class.

We can run the program as an application by typing

java HelloWorldApp

The command line interpreter looks for a function named main() in the HelloWorldApp class and then executes it.

Points to Note

The .class file gets its name from the name of the class and not the name of the source file. In this example we deliberately gave the source file a different name, but in practice, we will place each class in a separate file with the same name. This convention becomes important when we want to write a class that is publicly accessible.
 

Global functions are not allowed in Java®. This is why we placed our main() function inside our class. We must make our main() function static, since it should not be associated with a particular object, and we must also make it public, since it is the entry point to our program.

HelloWorld.java

class HelloWorldApp {
public static void main(String[] args) {
System.out.println("Hello World!");
}
}

The Hello World Applet

Here is an example of a simple Java® applet. We make our program an applet by writing a class, HelloWorld, that inherits the JApplet class provided in the Java® Swing API. The extends keyword declares that class HelloWorld inherits class JApplet. Before we can refer to the JApplet class, we must declare its existence using the import keyword. Here we have imported the JApplet class, which belongs to the javax.swing package, and we have also imported the Graphics class, which belongs to the java.awt package.

The JApplet class possesses a method named paint(), which it inherits from one of its superclasses. Our HelloWorld class inherits this paint() method when it inherits the JApplet class. The purpose of paint() is to draw the contents of the applet. Unfortunately, the default paint() method that we inherit cannot do anything useful since it has no way of knowing what we want to draw. We must therefore override the default paint() in our HelloWorld class. (Note that while C++ requires the use of the virtual keyword to indicate function overriding, Java® does not require us to inform the compiler that overriding will take place.)

The paint() method receives as an argument a Graphics object, which contains information about where and how we can draw. In this example, we choose to draw the text “Hello World!” at coordinates (50,25) by calling the drawString method.

We can compile our program by typing

    javac HelloWorld.java

This produces a file named HelloWorld.class. We now embed our applet in an HTML file, Hello.html, and we can run it by typing

    appletviewer Hello.html

We can also view Hello.html in a Java®-enabled browser.
 

HelloWorld.java

import javax.swing.JApplet;
import java.awt.Graphics;

public class HelloWorld extends JApplet {
public void paint(Graphics g) {
g.drawString("Hello world!", 50, 25);
}
}
 

Hello.html

<HTML>
<HEAD>
<TITLE> A Simple Program </TITLE>
</HEAD>

<BODY>
Here is the output of my program:
<APPLET CODE="HelloWorld.class" WIDTH=150 HEIGHT=25>
</APPLET>
</BODY>
</HTML> 


4. Java® Basics

Java® Data Types

Java® has two main categories of data types: primitive data types and reference data types. Java® does not support the notion of pointers.

Here is a list of primitive data types.

PRIMITIVE DATA TYPE   SIZE IN BYTES / FORMAT
byte                  1
char, short           2
int, float            4
long, double          8
boolean               true or false

Reference data types include arrays and classes. Here is an example of a Line class.

Line.java

class Line {

private int miX1, miX2, miY1, miY2;

public Line() {
miX1 = miX2 = miY1 = miY2 = 0;
}

public Line(int iX1, int iX2, int iY1, int iY2) {
miX1 = iX1;
miX2 = iX2;
miY1 = iY1;
miY2 = iY2;
}
}

Creating Objects - Constructors

In Java®, objects of user defined data types must be dynamically created. In the following example, the first statement declares a Line object, but does not actually create it. The second statement uses the new operator to actually create the object. Note the subtle differences between Java® and C++.

Line line;                        // Declaration of object (does not create object.)
line = new Line();            // Instantiation of object.

Garbage Collection and Finalization

The Java® runtime system provides a garbage collector, which periodically destroys any unused objects in dynamic memory. The Java® garbage collector uses a mark-sweep algorithm. The dynamic memory is first scanned for referenced objects and then all remaining objects are treated as garbage. Prior to deleting an object, the garbage collector will call the object’s finalizer, which allows the object to perform an orderly cleanup of any associated system resources, such as open files.

Finalization and garbage collection happen asynchronously in the background. It is also possible to request that these tasks be run using the System.runFinalization() and System.gc() methods.

A finalizer has the form

protected void finalize() throws Throwable {

// Clean up code for this class here.

super.finalize();  // Call the superclass’s finalizer (if provided.)
}

Inheritance

As indicated above, the extends keyword allows us to write classes that inherit the properties and methods of another class.

class SubClassName extends SuperClassName {

}

If a superclass name is not specified, the superclass is assumed to be java.lang.Object. Also, note that each class can have only one immediate superclass i.e. Java® does not support multiple inheritance.

Packages

A package is a group of related classes or interfaces. Each package defines its own namespace. Thus, two different packages may contain classes with the same name.

We can create a package by placing a package statement at the top of every source file that defines a class belonging to the package. We may later use the classes in the package by placing an import statement at the top of the source file that needs to access the classes in the package.
 

graphics/Line.java (Path of the file is relative to the CLASSPATH environment variable.)

package graphics;       // Class Line belongs to package graphics.

public class Line {      // The public class modifier makes this class accessible outside the package.

}

MyTest.java

import graphics.*;     // Provides access to all public classes in package graphics.

class MyTest {
public static void main(String[] args) {
Line line;
graphics.Line line2;   // Can be used for conflict resolution if two packages have a Line class.
line = new Line(0,0,3,4);
line2 = new Line();
}
}
 

If a package name is not specified for a class, then the class belongs to the default package. The default package has no name and it is always imported.

Here are a few of the core Java® packages:

  • java.lang            - core Java® language.
  • java.io               - input/output streams.
  • java.util             - utility classes, e.g. Stack, Vector, Hashtable, Observer/Observable.
  • java.net             - networking classes.
  • java.security      - security classes.
  • javax.swing       - Swing Graphical User Interface (GUI) components (the new preferred way).
  • java.awt            - Abstract Window Toolkit GUI components (the old way).
  • java.awt.image  - image processing.

Member Access Specifiers

There are four types of member access levels: private, protected, public and package. Note that, unlike C++, we must specify access levels on a per-member basis.

class Access {
private void privateMethod() {}        // Access level is "private".
protected void protectedMethod() {}    // Access level is "protected".
public void publicMethod() {}          // Access level is "public".
void packageMethod() {}                // Access level is "package".
}

ACCESS SPECIFIER      ACCESSIBLE BY      ACCESSIBLE BY        ACCESSIBLE BY     ACCESSIBLE BY
                      CLASS DEFINITION   SUBCLASS DEFINITION  REST OF PACKAGE   REST OF WORLD
private               yes                no                   no                no
protected             yes                yes                  yes               no
public                yes                yes                  yes               yes
none (i.e. package)   yes                no                   yes               no

Instance and Class members

As in C++, we can have instance members or class members. A class member is declared using the static keyword.

class MyPoint {
int x;
int y;
static int x_origin;
static int y_origin;
}

In this example, every object has its own x member, however, all objects share a single x_origin member. 

Constant Members

A final variable is one whose value cannot be changed e.g.

class Avo {
final double AVOGADRO = 6.023e23;
}

Class Modifiers

We have already seen some examples of member modifiers, such as public and private. Java® also allows us to specify class modifiers.

  • A public class is one which can be used by objects outside the current package e.g.

    package graphics;       // Class Line belongs to package graphics.

    public class Line {      // The public class modifier makes this class accessible outside the package.

    }
     

  • An abstract class is one which cannot be instantiated and must be subclassed instead. An abstract class may contain abstract methods, i.e. methods with no implementation; however, it may also provide default implementations for other methods. e.g.

    abstract class GraphicObject {
    int x, y;

    void moveTo(int newX, int newY) {

    }
    abstract void draw();  // A class containing an abstract method must itself be declared abstract.
    }

    class Circle extends GraphicObject {
    void draw() {

    }
    }
     

  • A final class is one which cannot be subclassed. This may be required for security or design reasons. e.g.

    final class String {

    }

    It is also possible to make individual methods final.

Topics

  1. Point.java
  2. Shape.java
  3. Circle.java
  4. Square.java
  5. Main.java

1. Point.java

public class Point
{
private float mfX, mfY;

public Point() {
mfX = mfY = 0.0f;
}

public Point(float fX, float fY) {
mfX = fX;
mfY = fY;
}

public Point(Point p) {
mfX = p.mfX;
mfY = p.mfY;
}

// You will generally not need to write a finalizer. Member variables that
// are of reference type will be automatically garbage collected once they
// are no longer in use. Finalizers are only for cleaning up system resources,
// e.g. closing files.
protected void finalize() throws Throwable {
System.out.print("In Point finalizer: ");
print();
super.finalize();  // If you have to write a finalizer, be sure to do this.
}

public void print() {
System.out.println("Point print: (" + mfX + "," + mfY + ")");
}
}

2. Shape.java

public abstract class Shape
{
private Point mCenter;
protected static int miCount = 0;  // An example of a static member variable.

public Shape() {
mCenter = new Point();
}

public Shape(Point p) {
mCenter = new Point(p);
}

// You will generally not need to write a finalizer. Member variables that
// are of reference type (i.e. mCenter) will be automatically garbage collected
// once they are no longer in use. Finalizers are only for cleaning up system
// resources, e.g. closing files.
protected void finalize() throws Throwable {
System.out.print("In Shape finalizer: ");
print();
super.finalize();  // If you have to write a finalizer, be sure to do this.
}

public void print() {
System.out.print("Shape print: mCenter = ");
mCenter.print();
}

// An example of a static member function.
public static int getCount() {
return miCount;  // Can only access static members in static functions.
}
}

3. Circle.java

public class Circle extends Shape
{
private float mfRadius;

public Circle() {
super();  // Call the base class constructor.
mfRadius = 0.0f;
miCount++;  // Can access this because it is protected in base class.
}

public Circle(float fX, float fY, float fRadius) {
super(new Point(fX, fY));  // Call the base class constructor.
mfRadius = fRadius;
miCount++;
}

public Circle(Point p, float fRadius) {
super(p);  // Call the base class constructor.
mfRadius = fRadius;
miCount++;
}

// You will generally not need to write a finalizer. Member variables that
// are of reference type (i.e. mCenter) will be automatically garbage collected
// once they are no longer in use. Finalizers are only for cleaning up system
// resources, e.g. closing files.
protected void finalize() throws Throwable {
System.out.print("In Circle finalizer: ");
print();
super.finalize();  // If you have to write a finalizer, be sure to do this.
}

public void print() {
System.out.print("Circle print: mfRadius = " + mfRadius + " ");
super.print();
}
}

4. Square.java

public class Square extends Shape
{
private float mfLength;

public Square() {
super();  // Call the base class constructor.
mfLength = 0.0f;
miCount++;  // Can access this because it is protected in base class.
}

public Square(float fX, float fY, float fLength) {
super(new Point(fX, fY));  // Call the base class constructor.
mfLength = fLength;
miCount++;
}

public Square(Point p, float fLength) {
super(p);  // Call the base class constructor.
mfLength = fLength;
miCount++;
}

// You will generally not need to write a finalizer. Member variables that
// are of reference type (i.e. mCenter) will be automatically garbage collected
// once they are no longer in use. Finalizers are only for cleaning up system
// resources, e.g. closing files.
protected void finalize() throws Throwable {
System.out.print("In Square finalizer: ");
print();
super.finalize();  // If you have to write a finalizer, be sure to do this.
}

public void print() {
System.out.print("Square print: mfLength = " + mfLength + " ");
super.print();
}
}

5. Main.java

public class Main
{
final static int MAX = 3;  // An example of a constant class member variable.

public static void main(String[] args)
{
// Create some Point objects.
Point a;
a = new Point();
a.print();

Point b;
b = new Point(2,3);
b.print();

Point c = new Point(b);
c.print();

// Print out the total number of Shapes created so far. At this point,
// no Shapes have been created, however, we can still access static member
// function Shape.getCount().
System.out.println("Total number of Shapes = " + Shape.getCount());

// Create a Circle object and hold on to it using a Shape reference.
Shape s;
s = new Circle(a,1);
s.print(); // This will call the print method in Circle.

// Create an array of Shapes.
Shape[] shapeArray;
shapeArray = new Shape[MAX];  // An array of Shape references.

shapeArray[0] = new Square();
shapeArray[1] = new Circle(4,5,2);
shapeArray[2] = new Square(3,3,1);

// Print out the array of Shapes. The length member gives the array size.
for (int i = 0; i < shapeArray.length; i++) {
shapeArray[i].print();
}

// Print out the total number of Shapes created so far. At this point,
// 4 Shapes have been created.
System.out.println("Total number of Shapes = " + Shape.getCount());

// We can mark the objects for destruction by removing all references to
// them. Normally, we do not need to call the garbage collector explicitly.
// Note: here we have not provided a way to decrement the Shape counter.
a = b = c = null;
s = null;
for (int i = 0; i < shapeArray.length; i++) {
shapeArray[i] = null;
}
shapeArray = null;
}
}

Topics

  1. Introduction
  2. Performance Criteria
  3. Selection Sort
  4. Insertion Sort
  5. Shell Sort
  6. Quicksort
  7. Choosing a Sorting Algorithm

1. Introduction

Sorting techniques have a wide variety of applications. Computer-Aided Engineering systems often use sorting algorithms to help reason about geometric objects, process numerical data, rearrange lists, etc. In general, therefore, we will be interested in sorting a set of records containing keys, so that the keys are ordered according to some well defined ordering rule, such as numerical or alphabetical order. Often, the keys will form only a small part of the record. In such cases, it will usually be more efficient to sort a list of keys without physically rearranging the records.

2. Performance Criteria

There are several criteria to be used in evaluating a sorting algorithm:

  • Running time. Typically, an elementary sorting algorithm requires O(N²) steps to sort N randomly arranged items. More sophisticated sorting algorithms require O(N log N) steps on average. Algorithms differ in the constant that appears in front of the N² or N log N. Furthermore, some sorting algorithms are more sensitive to the nature of the input than others. Quicksort, for example, requires O(N log N) time in the average case, but requires O(N²) time in the worst case. 
  • Memory requirements. The amount of extra memory required by a sorting algorithm is also an important consideration. In-place sorting algorithms are the most memory efficient, since they require practically no additional memory. Linked list representations require an additional N words of memory for a list of pointers. Still other algorithms require sufficient memory for another copy of the input array. These are the most inefficient in terms of memory usage. 
  • Stability. This is the ability of a sorting algorithm to preserve the relative order of equal keys in a file.

Examples of elementary sorting algorithms are: selection sort, insertion sort, shell sort and bubble sort. Examples of sophisticated sorting algorithms are quicksort, radix sort, heapsort and mergesort. We will consider a selection of these algorithms which have widespread use. In the algorithms given below, we assume that the array to be sorted is stored in the memory locations a[1],a[2],…,a[N]. The memory location a[0] is reserved for special keys called sentinels, which are described below.

3. Selection Sort

This "brute force" method is one of the simplest sorting algorithms.

Approach

  • Find the smallest element in the array and exchange it with the element in the first position.
  • Find the second smallest element in the array and exchange it with the element in the second position.
  • Continue this process until done.

Here is the code for selection sort:

Selection.cpp

#include "Selection.h"     // Typedefs ItemType.

inline void swap(ItemType a[], int i, int j) {
ItemType t = a[i];
a[i] = a[j];
a[j] = t;
}

void selection(ItemType a[], int N) {
int i, j, min;

for (i = 1; i < N; i++) {
min = i;
for (j = i+1; j <= N; j++)
if (a[j] < a[min])
min = j;
swap(a,min,i);
}
}

Selection sort is easy to implement; there is little that can go wrong with it. However, the method requires O(N²) comparisons and so it should only be used on small files. There is an important exception to this rule. When sorting files with large records and small keys, the cost of exchanging records controls the running time. In such cases, selection sort requires O(N) time since the number of exchanges is at most N.

4. Insertion Sort

This is another simple sorting algorithm, which is based on the principle used by card players to sort their cards.

Approach

  • Choose the second element in the array and place it in order with respect to the first element.
  • Choose the third element in the array and place it in order with respect to the first two elements.
  • Continue this process until done.

Insertion of an element among those previously considered consists of moving larger elements one position to the right and then inserting the element into the vacated position.

Here is the code for insertion sort:

Insertion.cpp

#include "Insertion.h"         // Typedefs ItemType.

void insertion(ItemType a[], int N) {
int i, j;
ItemType v;

for (i = 2; i <= N; i++) {
v = a[i];
j = i;
while (a[j-1] > v) {
a[j] = a[j-1];
j--;
}
a[j] = v;
}
}

It is important to note that there is no test in the while loop to prevent the index j from running out of bounds. This could happen if v is smaller than a[1],a[2],…,a[i-1]. To remedy this situation, we place a sentinel key in a[0], making it at least as small as the smallest element in the array. The use of a sentinel is more efficient than performing a test of the form while (j > 1 && a[j-1] > v). Insertion sort is an O(N²) method both in the average case and in the worst case. For this reason, it is most effectively used on files with roughly N < 20. However, in the special case of an almost sorted file, insertion sort requires only linear time.

5. Shell Sort

This is a simple, but powerful, extension of insertion sort, which gains speed by allowing exchanges of non-adjacent elements.

Definition

An h-sorted file is one with the property that taking every h-th element (starting anywhere) yields a sorted file.

Approach

  • Choose an initial large step size, h_K, and use insertion sort to produce an h_K-sorted file.
  • Choose a smaller step size, h_(K-1), and use insertion sort to produce an h_(K-1)-sorted file, using the h_K-sorted file as input.
  • Continue this process until done. The last stage uses insertion sort, with a step size h_1 = 1, to produce a sorted file.

Each stage in the sorting process brings the elements closer to their final positions. The method derives its efficiency from the fact that insertion sort is able to exploit the order present in a partially sorted input file; input files with more order to them require a smaller number of exchanges. It is important to choose a good sequence of increments. A commonly used sequence is (3^K - 1)/2, …, 121, 40, 13, 4, 1, which is obtained from the recurrence h_k = 3h_(k+1) + 1. Note that the sequence obtained by taking powers of 2 leads to bad performance because elements in odd positions are not compared with elements in even positions until the end.

Here is the complete code for shell sort:

Shell.cpp

#include "Shell.h"         // Typedefs ItemType.

void shell(ItemType a[], int N) {
int i, j, h;
ItemType v;

for (h = 1; h <= N/9; h = 3*h+1);

for (; h > 0; h /= 3)
for (i = h+1; i <= N; i++) {
v = a[i];
j = i;
while (j > h && a[j-h] > v) {
a[j] = a[j-h];
j -= h;
}
a[j] = v;
}
}

Shell sort requires O(N^(3/2)) operations in the worst case, which means that it can be quite effectively used even for moderately large files (say N < 5000).

6. Quicksort

This divide and conquer algorithm is, in the average case, the fastest known sorting algorithm for large values of N. Quicksort is a good general purpose method in that it can be used in a variety of situations. However, some care is required in its implementation. Since the algorithm is based on recursion, we assume that the array (or subarray) to be sorted is stored in the memory locations a[left],a[left+1],…,a[right]. In order to sort the full array, we simply initialize the algorithm with left = 1 and right = N.

Approach

  • Partition the subarray a[left],a[left+1],…,a[right] into two parts, such that
    • element a[i] is in its final place in the array for some i in the interval [left,right].
    • none of the elements in a[left],a[left+1],…,a[i-1] are greater than a[i].
    • none of the elements in a[i+1],a[i+2],…,a[right] are less than a[i].
  • Recursively partition the two subarrays, a[left],a[left+1],…,a[i-1] and a[i+1],a[i+2],…,a[right], until the entire array is sorted.

How to partition the subarray a[left],a[left+1],…,a[right]:

  • Choose a[right] to be the element that will go into its final position.
  • Scan from the left end of the subarray until an element greater than a[right] is found.
  • Scan from the right end of the subarray until an element less than a[right] is found.
  • Exchange the two elements which stopped the scans.
  • Continue the scans in this way. Thus, all the elements to the left of the left scan pointer will be less than a[right] and all the elements to the right of the right scan pointer will be greater than a[right].
  • When the scan pointers cross we will have two new subarrays, one with elements less than a[right] and the other with elements greater than a[right]. We may now put a[right] in its final place by exchanging it with the leftmost element in the right subarray.

Here is the complete code for quicksort:

Quicksort.cpp

// inline void swap() is the same as for selection sort.

void quicksort(ItemType a[], int left, int right) {
int i, j;
ItemType v;

if (right > left) {
v = a[right];
i = left - 1;
j = right;
for (;;) {
while (a[++i] < v);
while (a[--j] > v);
if (i >= j) break;
swap(a,i,j);
}
swap(a,i,right);
quicksort(a,left,i-1);
quicksort(a,i+1,right);
}
}

Note that this code requires a sentinel key in a[0] to stop the right-to-left scan in case the partitioning element is the smallest element in the file. Quicksort requires O(N log N) operations in the average case. However, its worst case performance is O(N²), which occurs in the case of an already sorted file! There are a number of improvements which can be made to the basic quicksort algorithm.
 

  • Using the median of three partitioning method makes the worst case far less probable, and it eliminates the need for sentinels. The basic idea is as follows. Choose three elements, a[left], a[middle] and a[right], from the left, middle and right of the array. Sort them (by direct comparison) so that the median of the three is in a[middle] and the largest is in a[right]. Now exchange a[middle] with a[right-1]. Finally, we run the partitioning algorithm on the subarray a[left+1],a[left+2],…,a[right-2] with a[right-1] as the partitioning element.
  • Another improvement is to remove recursion from the algorithm by using an explicit stack. The basic idea is as follows. After partitioning, push the larger subfile onto the stack. The smaller subfile is processed immediately by simply resetting the parameters left and right (this is known as end-recursion removal). With the explicit stack implementation, the maximum stack size is about log₂ N. On the other hand, with the recursive implementation, the underlying stack could be as large as N.
  • A third improvement is to use a cutoff to insertion sort whenever small subarrays are encountered. This is because insertion sort, albeit an O(N²) algorithm, has a sufficiently small constant in front of the N² to be more efficient than quicksort for small N. A suitable value for the cutoff subarray size would be approximately in the range 5 to 25.

7. Choosing a Sorting Algorithm

Table 1 summarizes the performance characteristics of some common sorting algorithms. Shell sort is usually a good starting choice for moderately large files N < 5000, since it is easily implemented. Bubble sort, which is included in Table 1 for comparison purposes only, is generally best avoided. Insertion sort requires linear time for almost sorted files, while selection sort requires linear time for files with large records and small keys. Insertion sort and selection sort should otherwise be limited to small files. Quicksort is the method to use for very large sorting problems. However, its performance may be significantly affected by subtle implementation errors. Furthermore, quicksort performs badly if the file is already sorted. Another possible disadvantage is that quicksort is not stable i.e. it does not preserve the relative order of equal keys. All of the above sorting algorithms are in-place methods. Quicksort requires a small amount of additional memory for the auxiliary stack. There are a few other sorting methods which we have not considered. Heapsort requires O(N log N) steps both in the average case and the worst case, but it is about twice as slow as quicksort on average. Mergesort is another O(N log N) algorithm in the average and worst cases. Mergesort is the method of choice for sorting linked lists, where sequential access is required.

Table 1: Approximate running times for various sorting algorithms

METHOD           # COMPARISONS            # COMPARISONS   # EXCHANGES      # EXCHANGES
                 (AVERAGE CASE)           (WORST CASE)    (AVERAGE CASE)   (WORST CASE)
Selection sort   N²/2                     N²/2            N                N
Insertion sort   N²/4                     N²/2            N²/8             N²/4
Bubble sort      N²/2                     N²/2            N²/2             N²/2
Shell sort       ~N^1.25                  N^(3/2)         ?                ?
Quicksort        2N ln N (1.38 N log₂ N)  N²/2            N                N

Topics

  1. CVS Resources
  2. Introduction
  3. Environment Variable
  4. Setting Up a New Repository
  5. Importing a Project Into CVS
  6. Routine CVS Operations on Source Files
  7. Tagging, Branching and Merging

1. CVS Resources

Where to get CVS

The latest version of CVS can be obtained by anonymous ftp:

ftp://prep.ai.mit.edu/pub/gnu/

CVS references on the Web

Concurrent Versions System

CVS Index

2. Introduction

CVS (Concurrent Versions System) is a version control system which allows multiple software developers to work on a project concurrently. It maintains a single master copy of the source code, which is called the source repository. Individual developers may obtain a working copy by checking out a snapshot of the source repository. This working copy can be edited without affecting other developers, and once the changes are complete, CVS assists in merging the changes into the source repository. CVS supports parallel development efforts through branches, and it provides mechanisms for merging these branches back together when desired. It also provides the facility to tag the state of the directory tree at any given point so that the state can be recreated at a later time.

CVS runs under both Unix® and Windows® 95/NT environments, and it can be run as a client-server application with the source repository residing on a central Unix® server and the clients running on either Unix® or Windows® machines. The source repository may be accessed using a command line interface or through a web interface. Security provisions include simple password protection as well as Kerberos encryption.

CVS is really a front end to the slightly more primitive RCS revision control system.

3. Environment Variable

The CVSROOT environment variable needs to be set up to point to the source repository e.g.

setenv CVSROOT /mit/1.124/mysrc

Note

For a CVS password server running on a remote machine with the source repository located in /mysrc, the environment variable is set as follows:

setenv CVSROOT :pserver:USER@HOSTNAME:/mysrc

in which case, the user must log in using

cvs login

For a Kerberos-authenticated server, we would need to use kserver instead of pserver.

4. Setting Up a New Repository

Once the CVSROOT environment has been set to point to the desired location, we can create a new repository using

cvs init

This operation only needs to be done once.

5. Importing a Project Into CVS

A project which was previously not under CVS control can be imported into the CVS repository using the cvs import command. Be sure to change directory to the top level project directory first.

e.g.

cd ~/MyProject
cvs import -m"Importing project into CVS." Projects/MyProject vendortag releasetag

will create the directory $CVSROOT/Projects/MyProject in the CVS repository, and import the local project files into this directory. vendortag could be your name, and releasetag could be the string "start".

This operation only needs to be done once.

6. Routine CVS Operations on Source Files

Checking out the Project

cvs co Projects/MyProject

will make a local working copy of the repository files, with the same directory structure. There are several variants on this command. For example, instead of checking out a directory, one can check out a CVS module, which defines a collection of files and directories. CVS modules are defined in the $CVSROOT/modules file.

One can also check out a specific revision or tag using the -r option e.g.

cvs co -r1.3 Projects/MyProject/myfile.C

cvs co -rMyTag Projects/MyProject

It is also possible to check out the project as of a particular date and time:

cvs co -D"11/23/97 16:00:00 EST" Projects/MyProject

This command does not affect the source repository.

Updating a File

Before a set of changes can be committed to the source repository, all files must be brought up to date. Files can become out of date if someone else committed changes after the working copy was checked out. To update all files in the project, type

cd Projects/MyProject
cvs up

Individual files may also be updated e.g.

cvs up myfile.C

The update command will not throw away any local changes that have been made. Instead it will attempt to merge them with the changes that were retrieved from the repository. In some cases, the merge will fail and CVS will report a conflict. If this happens, the conflicts will have to be resolved by editing the portions of the source file that are in conflict. Conflicts can be detected by searching for the <<<<<<< conflict marker.

The -r option can also be used with the update command.

cvs up -r1.3 myfile.C

Note that the -r option to cvs co or cvs up will cause a sticky tag to be set (you can check using cvs stat). To remove the sticky tag and update to the latest revision, use

cvs up -A myfile.C

This command does not affect the source repository.

Looking at Differences

You can examine the local changes you are about to commit using the cvs diff command. e.g.

cvs diff myfile.C

This command does not affect the source repository.

Committing Changes to the Source Repository

Once you are sure that your changes are ready to be committed, use the cvs commit command. e.g.

cvs commit -m"This is my log message." myfile.C

This command affects the source repository!

Examining the Version History of a File

Use cvs log to examine the log entries for a particular file. e.g.

cvs log myfile.C

This command does not affect the source repository.

Examining the Status of a File

The current status of a file can be examined using the cvs stat command. This is useful for checking whether or not the file is up to date.

cvs stat myfile.C

This command does not affect the source repository.


Adding a New File

A new file can be added to the project using the cvs add command. Unlike cvs import, the cvs add command should be used when the project is already under CVS control, and has already been checked out. e.g.

cd Projects/MyProject
cvs add newfile.C 
cvs commit -m"Added a new file." newfile.C

This command affects the source repository!

Removing an Unwanted File

If a file is no longer necessary, it can be removed from the project using the cvs rm command. e.g.

cd Projects/MyProject 
cvs rm oldfile.C 
cvs commit -m"Removed an unwanted file." oldfile.C

This causes the file to be retired. It will still be kept in the source repository in a subdirectory called Attic, in case it ever needs to be resurrected.

This command affects the source repository!

7. Tagging, Branching and Merging

Tagging

Once the source tree has reached a stable state, it is a good idea to tag the tree, so that the stable state can be recreated. e.g.

cd Projects/MyProject 
cvs tag Release_1_0

The tree can then continue to be modified, but the stable state can always be recovered using

cvs co -rRelease_1_0 Projects/MyProject

Branching

A parallel branch can be created by using the cvs tag command with the -b option. e.g.

cd Projects/MyProject 
cvs tag BigExperiment_BASE 
cvs tag -b BigExperiment_BRANCH

Developers who wish to work on the branch will then check out the branch:

cvs co -rBigExperiment_BRANCH Projects/MyProject

All changes will then be committed to the branch. In the meantime, development can also proceed on the trunk by doing a normal check out without the -r option.

Merging

The changes on the branch can be merged back with the trunk as follows:

cvs co Projects/MyProject 
cvs tag BeforeBigMerge 
cvs up -jBigExperiment_BRANCH 
<Resolve any conflicts here.> 
cvs commit -m"Merged in the big experiment."

The merging process should be handled with care, since it is easy to make mistakes. It is possible to do more complicated merges, such as merging just a portion of the branch with the trunk. For more details, visit one of the CVS resources listed above.

Static Member Data and Static Member Functions

(Ref. Lippman 13.5)

The Point class that we have developed has both member data (properties) and member functions (methods). Each object that we create will have its own variables mfX and mfY, whose values can vary from one Point to another. In order to access a member function, we must have created an object first, e.g. if we want to write

a.print();

then the object a must already exist.

Suppose now that we wish to have a counter that will keep track of the number of Point objects that we create. It does not make sense for each Point object to have its own copy of the counter, since the counter will have the same value regardless of which object we are referring to. We would rather have a single integer variable that is shared by all objects of the class. We can do this by creating a static member variable as the counter. What if we wish to provide a member function to query the counter? We would not be able to access the member function unless we have created at least one object. We would rather have a function that is associated with the class itself and not with any object. We can do this by creating a static member function. The following example illustrates this.

point.h

// Declaration of class Point.

#ifndef _POINT_H_
#define _POINT_H_

#include <iostream.h>

class Point {
// The state of a Point object. Property variables are typically
// set up as private data members, which are read from and
// written to via public access methods.
private:
float mfX;
float mfY;
static int miCount;

// The behavior of a Point object.
public:
Point(float fX=0, float fY=0);        // A constructor that takes two floats.
~Point();                                         // The destructor.
static int get_count();
// …
};

#endif // _POINT_H_

point.C

// Definition of class Point.

#include "point.h"

// Initialize the counter.
int Point::miCount = 0;

// A constructor which creates a Point object from two floats.
Point::Point(float fX, float fY) {
cout << "In constructor Point::Point(float,float)" << endl;
mfX = fX;
mfY = fY;
miCount++;
}

// The destructor.
Point::~Point() {
cout << "In destructor Point::~Point()" << endl;
miCount--;
}

// Accessor for the counter variable.
int Point::get_count() {
return miCount;
}

point_test.C

#include "point.h"

int main() {
cout << Point::get_count() << endl;  // We don't have any Point objects yet!

Point a;
Point *b = new Point(1.0, 2.0);

cout << b->get_count() << endl;      // This is allowed, since *b exists.

delete b;

cout << a.get_count() << endl;           // This is allowed, since a exists.

return 0;
}

Topics

  1. Introduction
  2. Function Templates
  3. Class Templates

1. Introduction

Templates allow us to write functions and classes that are based on parameterized types. For example, we may wish to write a function or class to run quicksort on (1) an array of ints and (2) an array of floats. Rather than writing two separate versions, one for ints and one for floats, we may write a single generic template from which the compiler can generate the int and float versions of quicksort.

2. Function Templates

The following example illustrates how to use function templates.

FunctionTemplates.cpp

#include <iostream.h>

// A function template for creating functions that reverse the order of the elements in an array.
template<typename ItemType>
void reverse(ItemType a[], int N) {
for (int i = 0; i < N/2; i++) {
ItemType tmp = a[i];
a[i] = a[N-1-i];
a[N-1-i] = tmp;
}
}

// A function template, where the type cannot be inferred from the function arguments.
template<typename ItemType>
void print(void *p, int N) {
ItemType *a = (ItemType *)p;

for (int i = 0; i < N; i++)
cout << "Element " << i << " is " << a[i] << endl;
}
 

// Optional: you are allowed to explicitly instantiate the function templates, if you wish. If you don't
// do this, the instantiation will occur implicitly as a result of the function calls below.
template void print<int>(void *, int);
template void print<float>(void *, int);

const int aLength = 5;
const int bLength = 10;

int main() {
int i;
int a[aLength];
float b[bLength];

for (i = 0; i < aLength; i++)
a[i] = i;

for (i = 0; i < bLength; i++)
b[i] = (float)i;

// The compiler will create two versions of reverse(), one to handle ints and one to handle floats.
// In this case, ItemType can be inferred from the first argument.
reverse(a, aLength);
reverse(b, bLength);

// The compiler will create two versions of print(), one to handle ints and one to handle floats.
// In this case, ItemType cannot be inferred from the function arguments. Hence, explicit
// specification of the parameter is required. (VC++ users note: VC++ 6.0 has a bug which
// causes it to use the float version in both cases.)
print<int>((void *)a, aLength);
print<float>((void *)b, bLength);

return 0;
}

3. Class Templates

The following example illustrates how to use class templates.

ArrayClass.h

#include <iostream.h>

// This class template allows us to create array objects of any type and size.
template<typename ItemType, int size>
class ArrayClass {
private:
ItemType array[size];

public:
ArrayClass();
~ArrayClass() {}
void print();
};
 

// In a class template, all member function definitions should be placed in the header file.

template<typename ItemType, int size>
ArrayClass<ItemType, size>::ArrayClass() {
for (int i = 0; i < size; i++) {
array[i] = (ItemType)(i/2.0);   // The chosen default behavior.
}
}

template<typename ItemType, int size>
void ArrayClass<ItemType, size>::print() {
for (int i = 0; i < size; i++) {
cout << array[i] << endl;
}
}
 

Main.cpp

#include "ArrayClass.h"

int main() {
ArrayClass<int, 5> a;
ArrayClass<float, 10> b;

a.print();
cout << endl;
b.print();

return 0;
}

Topics

  1. Loading and Displaying Images
  2. Tracking Image Loading
  3. Image Animations
  4. Examples

1. Loading and Displaying Images

(Ref. Java® Tutorial)

Images provide a way to augment the aesthetic appeal of a Java® program. Java® provides support for two common image formats: GIF and JPEG. An image that is in one of these formats can be loaded by using either a URL or a filename.

The basic class for representing an image is java.awt.Image. Packages that are relevant to image handling are java.applet, java.awt and java.awt.image.
 

Loading an Image

Images can be loaded using the getImage() method. There are several versions of getImage(). When we create an applet by subclassing javax.swing.JApplet, we inherit the following methods from java.applet.Applet.

  • Image getImage(URL url)
  • Image getImage(URL url, String name)

These methods only work after the applet’s constructor has been called. A good place to call them is in the applet’s init() method. Here are some examples:

// In a method in an Applet subclass, such as the init() method:
Image image1 = getImage(getCodeBase(), "imageFile.gif");
Image image2 = getImage(getDocumentBase(), "anImageFile.jpeg");
Image image3 = getImage(new URL("http://java.sun.com/graphics/people.gif"));

In the first example, the code base is the URL of the directory that contains the applet's .class file. In the second example, the document base is the URL of the directory containing the HTML document that loads the applet.

Alternatively, we may use the getImage() methods provided by the Toolkit class.

  • Image getImage(URL url)
  • Image getImage(String filename)

This approach can be used in either an applet or an application. For example:

Toolkit toolkit = Toolkit.getDefaultToolkit();
Image image1 = toolkit.getImage("imageFile.gif");
Image image2 = toolkit.getImage(new URL("http://java.sun.com/graphics/people.gif"));

In general, applets cannot read files that are on the local machine for reasons of security. Thus, applets typically download any images they need from the server.

Note that getImage() returns immediately without waiting for the image to load. The image loading process occurs lazily, in that the image doesn’t start to load until the first time we try to display it.
 

Displaying an Image

Images can be displayed by calling one of the drawImage() methods supplied by the Graphics object that gets passed in to the paintComponent() method.

This version draws an image at the specified position using its natural size:

boolean drawImage(Image img, int x, int y, ImageObserver observer)

This version draws an image at the specified position, and scales it to the specified width and height:

boolean drawImage(Image img, int x, int y, int width, int height, ImageObserver observer)

The ImageObserver is a mechanism for tracking the loading of an image (see below). One of the uses for an ImageObserver is to ensure that the image is properly displayed once it has finished loading. The return value from drawImage() is rarely used: this value is true if the image has been completely loaded and thus completely painted, and false otherwise.

Here is a simple example of loading and displaying images.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

// This applet displays a single image twice,
// once at its normal size and once much wider.

public class ImageDisplayer extends JApplet {
static String imageFile = "images/rocketship.gif";

public void init() {
Image image = getImage(getCodeBase(), imageFile);
ImagePanel imagePanel = new ImagePanel(image);
getContentPane().add(imagePanel, BorderLayout.CENTER);
}
}

class ImagePanel extends JPanel {
Image image;

public ImagePanel(Image image) {
this.image = image;
}

public void paintComponent(Graphics g) {
super.paintComponent(g);  // Paint background

// Draw image at its natural size first.
g.drawImage(image, 0, 0, this); //85x62 image

// Now draw the image scaled.
g.drawImage(image, 90, 0, 300, 62, this);
}
}
 

2. Tracking Image Loading

The most frequent reason to track image loading is to find out when an image or group of images is fully loaded. At a minimum, we will want to make sure that each image is redrawn after it finishes loading; otherwise, only part of the image will be visible. We may even wish to wait until image loading is complete before attempting to do any drawing at all. There are two ways to track images: using the MediaTracker class and by implementing the ImageObserver interface.

Media Trackers

(Ref. Java® Tutorial)

The MediaTracker class provides a relatively simple way to delay drawing until the image loading process is complete. We can modify the ImageDisplayer applet to perform the following steps:

  • Create a MediaTracker object.
  • Add the image to it using the addImage() method. (If we had several images, they would all be added to the same MediaTracker.)
  • Use the waitForAll() method to load the image data synchronously when the program starts up.
  • Use the checkAll() method in the paintComponent() method to test whether image loading is complete.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

// This applet displays a single image twice,
// once at its normal size and once much wider.

public class ImageDisplayer extends JApplet {
static String imageFile = "images/rocketship.gif";

public void init() {
Image image = getImage(getCodeBase(), imageFile);

// Create a media tracker and add the image to it.  If we had several
// images to load, they could all be added to the same media tracker.
MediaTracker tracker = new MediaTracker(this);
tracker.addImage(image, 0);

// Start downloading the image and wait until it finishes loading.
try {
tracker.waitForAll();
}
catch(InterruptedException e) {}

ImagePanel imagePanel = new ImagePanel(image, tracker);
getContentPane().add(imagePanel, BorderLayout.CENTER);
}
}

class ImagePanel extends JPanel {
Image image;
MediaTracker tracker;

public ImagePanel(Image image, MediaTracker tracker) {
this.image = image;
this.tracker = tracker;
}

public void paintComponent(Graphics g) {
super.paintComponent(g);  // Paint background

// Check that the image has loaded before trying to draw it.
if (!tracker.checkAll()) {
g.drawString("Please wait...", 0, 20);  // y = 20 so the text baseline is visible
return;
}

// Draw image at its natural size first.
g.drawImage(image, 0, 0, this); //85x62 image

// Now draw the image scaled.
g.drawImage(image, 90, 0, 300, 62, this);
}
}
 

Image Observers

Image observers provide a way to track image loading even more closely. In order to track image loading, we must pass in an object that implements the ImageObserver interface as the last argument to the Graphics object’s drawImage() method. The ImageObserver interface has a method named

boolean imageUpdate(Image img, int flags, int x, int y, int width, int height)

which will be called whenever an interesting milestone in the image loading process is reached. The flags argument can be examined to determine exactly what this milestone is. The ImageObserver interface defines the following constants, against which the flags argument can be tested using the bitwise AND operator:

public static final int WIDTH;
public static final int HEIGHT;
public static final int PROPERTIES;
public static final int SOMEBITS;
public static final int FRAMEBITS;
public static final int ALLBITS;
public static final int ERROR;
public static final int ABORT;

The java.awt.Component class implements the ImageObserver interface and provides a default version of imageUpdate(), which calls repaint() when the image has finished loading. The following example shows how we could modify the ImageDisplayer applet, so that the ImagePanel class provides its own version of imageUpdate() instead of using the one that it inherits from java.awt.Component. Note that we pass this as the last argument to drawImage().
 

import java.awt.*;
import java.awt.event.*;
import java.awt.image.ImageObserver;
import javax.swing.*;

// This applet displays a single image twice,
// once at its normal size and once much wider.

public class ImageDisplayer extends JApplet {
static String imageFile = "images/rocketship.gif";

public void init() {
Image image = getImage(getCodeBase(), imageFile);
ImagePanel imagePanel = new ImagePanel(image);
getContentPane().add(imagePanel, BorderLayout.CENTER);
}
}

class ImagePanel extends JPanel implements ImageObserver {
Image image;

public ImagePanel(Image image) {
this.image = image;
}

public void paintComponent(Graphics g) {
super.paintComponent(g);  // Paint background

// Draw image at its natural size first.
g.drawImage(image, 0, 0, this); //85x62 image

// Now draw the image scaled.
g.drawImage(image, 90, 0, 300, 62, this);
}

public boolean imageUpdate(Image image, int flags, int x, int y,
int width, int height) {
// If the image has finished loading, repaint the window.
if ((flags & ALLBITS) != 0) {
repaint();
return false;  // Return false to say we don't need further notification.
}
return true;       // Image has not finished loading, need further notification.
}
}
 

3. Image Animations

(Ref. Java® Tutorial: Moving an Image Across the Screen and Displaying a Sequence of Images)

Moving an Image Across the Screen

The simplest type of image animation involves moving a single frame image across the screen. This is known as cutout animation, and it is accomplished by repeatedly updating the position of the image in an animation thread, in a similar fashion to the bouncing ball animation we saw earlier.
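As a minimal sketch, the idea can be expressed as a small Swing panel. The CutoutPanel class and its field names are invented for illustration; the image is assumed to be loaded elsewhere, and a javax.swing.Timer stands in for the hand-rolled animation thread:

```java
import java.awt.Graphics;
import java.awt.Image;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JPanel;
import javax.swing.Timer;

// Hypothetical sketch of cutout animation: a timer periodically advances
// the image's x position and repaints the panel.
class CutoutPanel extends JPanel implements ActionListener {
    Image image;   // assumed to be loaded elsewhere (e.g. with a MediaTracker)
    int x = 0;     // current horizontal position of the image
    static final int STEP = 5;          // pixels moved per tick
    static final int PANEL_WIDTH = 300;

    public CutoutPanel(Image image) {
        this.image = image;
    }

    // Start the animation: fire actionPerformed() every 50 milliseconds.
    public void start() {
        new Timer(50, this).start();
    }

    // Called on each timer tick: move the image and wrap at the right edge.
    public void actionPerformed(ActionEvent e) {
        x = (x + STEP) % PANEL_WIDTH;
        repaint();
    }

    public void paintComponent(Graphics g) {
        super.paintComponent(g);  // Paint background
        g.drawImage(image, x, 0, this);
    }
}
```

Using a javax.swing.Timer rather than a raw thread keeps the position updates and repaints on the event-dispatching thread.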

Displaying a Sequence of Images

Another type of image animation is cartoon-style animation, in which a sequence of image frames is displayed in succession. The following example does this by creating an array of ten Image objects and then incrementing the array index every time the paintComponent() method is called. The portion of code that is of main interest is:

// In initialization code.
Image[] images = new Image[10];
for (int i = 1; i <= 10; i++) {
images[i-1] = getImage(getCodeBase(), "images/duke/T" + i + ".gif");
}

// In the paintComponent method.
g.drawImage(images[ImageSequenceTimer.frameNumber % 10], 0, 0, this);

This is a good example of why a MediaTracker should be used to delay drawing until after all the images have loaded.
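A hypothetical helper for that loading step might look as follows. The FrameLoader name is invented; in a real applet the frames array would hold the downloaded GIF images, and the component could be the applet itself:

```java
import java.awt.Component;
import java.awt.Image;
import java.awt.MediaTracker;

// Hypothetical helper: register every animation frame with a single
// MediaTracker and block until all of them have finished loading.
class FrameLoader {
    // Returns true once every frame has loaded without error.
    static boolean loadAll(Component component, Image[] frames) {
        MediaTracker tracker = new MediaTracker(component);
        for (int i = 0; i < frames.length; i++)
            tracker.addImage(frames[i], i);  // one ID per frame
        try {
            tracker.waitForAll();            // blocks until loading completes
        } catch (InterruptedException e) {
            return false;
        }
        return tracker.checkAll() && !tracker.isErrorAny();
    }
}
```

In the animation applet, FrameLoader.loadAll(this, images) would be called from init(), before the first frame is drawn.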

4. Examples

For more examples of image animation and other interesting applets, check out Code Samples and Applets.
