Introduction
1.1-1.3 Check Your Understanding
- What is the difference between machine language and assembly language?
- In what way(s) are high-level languages an improvement on assembly language? Are there circumstances in which it still makes sense to program in assembler?
- Why are there so many programming languages?
- What makes a programming language successful?
- Name three languages in each of the following categories: von Neumann, functional, object-oriented. Name two logic languages. Name two widely used concurrent languages.
- What distinguishes declarative languages from imperative languages?
- What organization spearheaded the development of Ada?
- What is generally considered the first high-level programming language?
- What was the first functional language?
- Why aren’t concurrent languages listed as a separate family in Figure 1.1?
1.4-1.5 Check Your Understanding
- Explain the distinction between interpretation and compilation. What are the comparative advantages and disadvantages of the two approaches?
- Is Java compiled or interpreted (or both)? How do you know?
- What is the difference between a compiler and a preprocessor?
- What was the intermediate form employed by the original AT&T C++ compiler?
- What is P-code?
- What is bootstrapping?
- What is a just-in-time compiler?
- Name two languages in which a program can write new pieces of itself “on the fly.”
- Briefly describe three “unconventional” compilers—compilers whose purpose is not to prepare a high-level program for execution on a general-purpose processor.
- List six kinds of tools that commonly support the work of a compiler within a larger programming environment.
- Explain how an integrated development environment (IDE) differs from a collection of command-line tools.
1.6 Check Your Understanding
- List the principal phases of compilation, and describe the work performed by each.
- List the phases that are also executed as part of interpretation.
- Describe the form in which a program is passed from the scanner to the parser; from the parser to the semantic analyzer; from the semantic analyzer to the intermediate code generator.
- What distinguishes the front end of a compiler from the back end?
- What is the difference between a phase and a pass of compilation? Under what circumstances does it make sense for a compiler to have multiple passes?
- What is the purpose of the compiler’s symbol table?
- What is the difference between static and dynamic semantics?
- On modern machines, do assembly language programmers still tend to write better code than a good compiler can? Why or why not?
1.8 Exercises
1.1
Errors in a computer program can be classified according to when they are detected and, if they are detected at compile time, what part of the compiler detects them. Using your favorite imperative language, give an example of each of the following.
- (a) A lexical error, detected by the scanner
- (b) A syntax error, detected by the parser
- (c) A static semantic error, detected by semantic analysis
- (d) A dynamic semantic error, detected by code generated by the compiler
- (e) An error that the compiler can neither catch nor easily generate code to catch (this should be a violation of the language definition, not just a program bug)
1.2
Consider again the Pascal tool set distributed by Niklaus Wirth (Example 1.15). After successfully building a machine language version of the Pascal compiler, one could in principle discard the P-code interpreter and the P-code version of the compiler. Why might one choose not to do so?
1.3
Imperative languages like Fortran and C are typically compiled, while scripting languages, in which many issues cannot be settled until run time, are typically interpreted. Is interpretation simply what one “has to do” when compilation is infeasible, or are there actually some advantages to interpreting a language, even when a compiler is available?
1.4
The gcd program of Example 1.20 might also be written
```c
int main() {
    int i = getint(), j = getint();
    while (i != j) {
        if (i > j) i = i % j;
        else j = j % i;
    }
    putint(i);
}
```

Does this program compute the same result? If not, can you fix it? Under what circumstances would you expect one or the other to be faster?
1.5
Expanding on Example 1.25, trace an interpretation of the gcd program on the inputs 12 and 8. Which syntax tree nodes are visited, in which order?
1.6
Both interpretation and code generation can be performed by traversal of a syntax tree. Compare these two kinds of traversals. In what ways are they similar/different?
1.7
In your local implementation of C, what is the limit on the size of integers? What happens in the event of arithmetic overflow? What are the implications of size limits on the portability of programs from one machine/compiler to another? How do the answers to these questions differ for Java? For Ada? For Pascal? For Scheme? (You may need to find a manual.)
1.8
The Unix make utility allows the programmer to specify dependences among the separately compiled pieces of a program. If file A depends on file B and file B is modified, make deduces that A must be recompiled, in case any of the changes to B would affect the code produced for A. How accurate is this sort of dependence management? Under what circumstances will it lead to unnecessary work? Under what circumstances will it fail to recompile something that needs to be recompiled?
1.9
Why is it difficult to tell whether a program is correct? How do you go about finding bugs in your code? What kinds of bugs are revealed by testing? What kinds of bugs are not? (For more formal notions of program correctness, see the bibliographic notes at the end of Chapter 4.)