Why not use Numerical Recipes?

(compiled by W. Van Snyder)

We have found Numerical Recipes to be generally unreliable.

In broad outline, the reason is that Numerical Recipes values simplicity above other virtues that are frequently more important. Complex problems frequently have complex solutions, or require complex processes to arrive at any solution at all. This is not a new insight: H. L. Mencken (1880-1956) wrote

... there is always a well-known solution to every human problem -- neat, plausible, and wrong.
in Prejudices: Second Series (1920); or, equivalently:
For every complex problem, there is a solution that is simple, neat, and wrong.
These lessons are, however, too frequently forgotten, and appear to have been forgotten in the planning and execution of Numerical Recipes. More specific reasons not to use Numerical Recipes are outlined below. There are excellent alternatives.

Paraphrased from a previously published article:

Charles Lawson, Numerical Recipes: A boon for scientists and engineers, or not?, JPL ICIS Newsletter 9, 1 (January 1991)

"There is a series of books and associated software with the name Numerical Recipes in the titles that provide descriptions of numerical algorithms and associated programs in popular programming languages....

"The good news is that this series gives exceptionally broad coverage of computational topics that arise in scientific and engineering computing at a very reasonable price.... The bad news is that the quality and reliability of the mathematical exposition and the codes it contains are spotty. It is not safe, we have found, to take discussions in the book as authoritative or to use the codes with confidence in the validity of the results.

"The authors are identified on the book jacket as `leading scientists' and [we] have no reason to think that they are not. However, there is no claim that they have special competence in numerical analysis or mathematical software. At least in the parts of the book that [we] have studied closely, they do not demonstrate any such competence.

"Published reviews of the book[s] have fallen into two classes: Testimonials and reviews by scientists [including Kenneth Wilson, Nobel Laureate] and engineers tend to extol the broad scope and convenience of the products, without seriously evaluating the quality, while reviews by numerical analysts are very critical of the quality of the discussions and the codes....

"Two reviews by numerical analysts are:

  1. Lawrence F. Shampine, Review of `Numerical Recipes, The Art of Scientific Computing', The American Mathematical Monthly 94, 9 (Nov 87) 889-892.
  2. Richard J. Hanson, Cooking with `Numerical Recipes' on a PC, SIAM (Society for Industrial and Applied Mathematics) News 28, 3 (May 90) 18.

"Professor Shampine is a specialist in the numerical solution of ordinary differential equations (ODEs). He gives specific criticisms of chapter 15, which deals with ODEs, and says, in summary:

`This chapter describes numerical methods for ODE's from the viewpoint of 1970. If the authors had consulted an expert in the subject or read one of the good survey articles available, I think they would have assessed the methods differently and presented more modern versions of the methods.'
[Since Shampine wrote this, the authors of NR have consulted a worker active in the field. Unfortunately, a great many other experts in the field consider the advice they got to be very poor indeed -- extrapolation methods are almost always substantially inferior to Runge-Kutta, Taylor's series, or multistep methods.]

"He also remarks that adaptive methods for numerical quadrature problems are not treated in NR although they are much in favor by numerical analysts.

"Dr. Hanson is a former editor of the algorithms department of the Association for Computing Machinery Transactions on Mathematical Software (ACM TOMS). He ran tests of the nonlinear least-squares codes from NR and made comparisons with published results of better known codes LMDIR from MINPACK and NL2SOL.... He found the NR codes sometimes required up to 20 times as many iterations as the comparison codes. He noted that the control of the Levenberg-Marquardt damping parameter λ was not sufficiently sophisticated, permitting overflow or underflow of λ to occur... the algorithm in NR is a very bare-bones implementation of the ideas presented in the referenced 1963 paper by Marquardt. Many significant enhancements of that idea have been given in the intervening 27 years. [We] would expect the codes LMDIR, NL2SOL, and their successors to be much more EFFICIENT AND RELIABLE [editor's emphasis].

"[Our] present attention to the NR products was initiated by calls for consultation .... Two involved the topics mentioned above.... Other calls led us to scrutinize Sections 6.6, Spherical Harmonics and 14.6, Robust Estimation

"...The discussion, algorithms, and code given in section 6.6 is internally consistent and the choices of the recursions to use in computing the associated Legendre functions are ones recommended by specialists in the topic as being stable. No warning is given, however, regarding the fact that there are a number of alternative conventions in use regarding signs and normalization factors.... [If one naively combined results from NR codes with results from other sources] one would probably obtain incorrect results.

"In reading the section on robust estimation, [we were] skeptical of Figure 14.6.1(b) that shows a `robust straight-line fit' looking substantially `better' than a `least-squares fit'....

"To check [our] doubts about this figure, [we] enlarged it and traced the points and the `fitted' lines onto graph paper to obtain data with which [to] experiment....

"We computed a least squares fit.... The particular `robust' method illustrated by figure 14.6.1(b) is not identified. However, since the only method for which NR attempts to give code in this area is L1 fitting, [we] computed an L1 fit to the data as an example of a `robust' fit.... [We] used a subroutine CL1, that was published in the algorithms department of ACM TOMS in 1980, to obtain an L1 fit in which [we] could have confidence. [We] also applied the NR code MEDFIT to the data and obtained a fit that agreed with the CL1 fit to about three decimal places.

..."As expected, the least-squares fit is not as far from the visual trend as in figure 14.6.1(b) and the L1 fit is not as close.... It appears that the lines labelled `fits' in the NR figure 14.6.1(b) are not the result of any computed fitting at all, but are just suggestive lines drawn by the authors to buttress their enthusiasm for `robust' fitting. An uncritical reader would probably incorrectly assume that figure 14.6.1(b) illustrates the performance of actual algorithms.

"The objective function in an L1 fitting problem is not differentiable at parameter values that cause the fitted line to interpolate one or more data points. The authors indicate some awareness of this fact but not of all its consequences for a solution algorithm. Typically, the solution to this problem will interpolate two or more data points, and in the authors' algorithm, it would be common for trial fits in the course of execution of the algorithm to interpolate at least one data point;... suffice it to note that it is easy to produce data sets for which the MEDFIT/ROFUNC code will fail.

"One data set that causes looping is [x = 1, 2, 3; y = 1, 1, 1]. Another that causes looping in a different part of the code is [x = 2, 3, 4; y = 1, 3, 2]. A data set on which the code terminates, but with a significantly wrong result is [x = 3, 4, 5, 6, 7; y = 1, 3, 2, 4, 3]. Because of the faulty theoretical foundation, there is no reason to believe any particular result obtained by this code is correct, although by chance it will sometimes get a correct result....

"Conclusions

"The authors of Numerical Recipes were not specialists in numerical analysis or mathematical software prior to publication of this book and its software, and this deficiency shows WHENEVER WE TAKE A CLOSE LOOK AT A TOPIC in the book [editor's emphasis]. The authors have attempted to cover a very extensive range of topics. They have basically found `some' way to approach each topic rather than finding one of the best contemporary ways. In some cases they were apparently not aware of standard theory and algorithms, and consequently devised approaches of their own. The MEDFIT code of section 14.6 is a particularly unfortunate example of this latter situation.

"One should independently check the validity of any information or codes obtained from `Numerical Recipes'...."

Editor's remarks

(12 Mar 1991)
I have independently checked the codes for Bessel Functions and Modified Bessel Functions of the first kind and orders zero and one (J0, J1, I0, I1). The approximations given in NR are those to be found in the National Bureau of Standards Handbook of Mathematical Functions (Applied Mathematics Series 55), which were published by Cecil Hastings in 1959. Although the approximations achieve their advertised accuracy, that accuracy is modest: don't trust them beyond 6 digits. Coding them in "double precision" won't help, because the limit is in the approximations themselves, not the arithmetic. Much work has been done on the approximation of special functions in the last 32 years.

We haven't investigated the quality of every one of the NR algorithms and codes, nor the exposition in every chapter of NR (we have more productive things to do). But sampling randomly (based on calls for consultation) in four areas, and finding ALL FOUR faulty, we have very little confidence in the rest.

Received in the mail, in response to USENET postings:

(All but one of the following were sent to me before May 1993. The authors of Numerical Recipes have updated many routines -- and not updated others. So some of the problems noted here may already have been corrected. The authors maintain a collection of patches and bug reports.)

(27 Nov 1991)
"You can add the section on PDE's to the list of `bad'. The discussion of relaxation solvers for elliptic PDE's starts off OK (in about 1950, but that is OK for a naive user if he is not in a hurry) but then fails to mention little details like boundary conditions! Their code has the implicit assumption that all elliptic problems have homogeneous Dirichlet boundary conditions!

"Then they have their little coding quirks, like accessing their arrays the wrong way and putting unnecessary IF and MOD statements inside of inner loops....

"On the other hand, I did learn something from their discussion of the Conjugate Gradient technique for solving systems of linear equations. I did not like their implementation, but the discussion was OK."

And:

(27 Nov 1991)
"Your posts in sci.physics about Numerical Recipes match my experience. I've found that Numerical Recipes provide just enough information for a person to get himself into trouble, because after reading NR, one thinks that one understands what's going on. The one saving grace of NR is that it usually provides references; after one has been burned enough times, one learns to go straight to the references :-).

"Example: Section 9.5 claims that Laguerre's method, used for finding zeros of a polynomial, converges from any starting point. According to Ralston and Rabinowitz, however, this is only true if all the roots of the polynomial are real. For example, Laguerre's method runs into difficulty for the polynomial f(x) = x^n + 1 if the initial guess is 0, because f'(0) = f''(0) = 0."

And:

(27 Nov 1991)
"As an aside, I have just received a preprint from Press describing what looks like chapter 18 for NR -- about the discrete wavelet transform. Now, I can tell you that this stuff is wrong, as the results which are in his figures are not reconstructable using his routines. Don't know why yet, but it just doesn't work. If anyone out there is using these routines -- toss them. If you have fixed these routines or have other discrete wavelet transform routines, I want to know about it. Thanks."

And:

(27 Nov 1991)
" ... And so let me offer my personal caveat: SVDCMP does not always work. I found one example where the result is just wrong (fortunately it is easy to check, but one doesn't usually do so). I translated the NR Fortran to C, and also tried the NR C code. Both wrong in the same way. I tried IMSL and Linpack in Fortran, and tried translating Linpack to C; all three produced correct answers...."

And:

(22 Apr 1992)
"The NR-recommended random number generators RAN1 and RAN2 should not be used for any serious application. If you use the top bit of RAN1 to create a discrete random walk (plus or minus 1 with equal probability) of length 10,000, the variance will be around 1500, far below the desired value of 10,000.

"Both are low-modulus generators with a shuffling buffer, in one case with the bottom bits twiddled with another low-modulus generator. The moduli are just too low for serious work, and the resulting generators even out too well."

And:

(23 Apr 1992)
"... It seems that everyone I talk to has a different part of the book that they don't like (The part I hate most is the section on simulated annealing and the travelling salesman's problem- there are far better approaches to the problem.)"

And:

(1 Mar 1993)
"Yes, I was another numerical babe in the woods, told the NR was the ultimate word (obviously by professors and colleagues who had never used it!) and so I spent months trying to figure out why their QL decomposition routine didn't work. I thought it was me...."

And:

(1 Mar 1993)
"May be your `collected horror stories' will support my bad experience with the FFT code: Please send these stories to me !"

And:

(26 Feb 1993 Stephan Neuhaus -- attributed with permission)
"I recently encountered a bug in Numerical Recipes in C (dunno what edition). Look at the code for "ksprob()". When the routine fails to converge after so many iterations, they return 0.0. But a quick glance at the formula reveals that the sum will not converge for small λ (and they must be small indeed), hence the correct value to return is 1.0, not 0.0.

"Additionally, it would be nice to caution users that this formula is only an asymptotic approximation to the true function (which nobody, apparently, has figured out yet), and that the method is horribly unstable for small λ."

According to Stephan, this bug has been fixed in the newest (as of 30 May 2002) edition. He continues (30 May 2002):

As an aside, even if some of the complaints on your web site are out of date and have long been fixed, I still think that it acts as a necessary counterweight to the almost unanimous praise that is heaped on NR.

And:

(30 Jun 1992)
"My experience with NR is brief - I used HEAPSORT and it crashed when I attempted to sort a vector of length one. Easy to fix
- just add IF(N.LE.1) RETURN
- but indicates that the code is not as good as a reliable subroutine library, more just a pedagogic text."
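
[Editor's note: for reference, a self-contained heapsort in C with the guard the writer suggests. This is our sketch, not NR's Fortran HEAPSORT; the point is the length check at the top.]

    #include <stddef.h>

    static void sift_down(double a[], size_t root, size_t end)
    {
        while (2 * root + 1 <= end) {
            size_t child = 2 * root + 1;             /* left child */
            if (child + 1 <= end && a[child] < a[child + 1])
                child++;                             /* pick the larger child */
            if (a[root] >= a[child])
                return;                              /* heap property holds */
            double t = a[root]; a[root] = a[child]; a[child] = t;
            root = child;
        }
    }

    void heapsort_guarded(double a[], size_t n)
    {
        if (n <= 1)                  /* the IF(N.LE.1) RETURN guard */
            return;
        for (size_t start = n / 2; start-- > 0; )
            sift_down(a, start, n - 1);              /* build the heap */
        for (size_t end = n - 1; end > 0; end--) {
            double t = a[0]; a[0] = a[end]; a[end] = t;
            sift_down(a, 0, end - 1);                /* restore the heap */
        }
    }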

And:

(13 May 1996)
"Your web pages on NR are a valuable public service. My own horror story involves the Marquardt routine, which I've found to be totally unreliable in giving the right answer. It works for the extremely simple example given in the book, but when asked to fit a simple growth function, it gave a wrong answer somewhere near the starting values. A colleague and I wrote to Press et al. with these findings, but received a letter saying, essentially, that we must have made a mistake. I can assure you that our results were checked against several standard statistical packages (SAS and Systat), as well as against an older Fortran program, and NR gives the wrong answer with no warning or other indication of a mishap.

"I could add that the NR treatment of the polytope (or "simplex") algorithm AMOEBA has a major flaw that I am aware of. NR does acknowledge the tendency of this algorithm to stop at local minima. They recommend restarting it to reduce this tendency, but the driver routine in the "NR Example Book" (1988 version) omits this aspect. I have run into several applications that routinely give the wrong answer because the authors missed the explanation tucked at the end of the NR text or relied on the NR driver routine."

And:

(24 October 1996)
I've been doing an integration with the IDL routine QROMO, which is an open Romberg algorithm based on the Numerical Recipes routine of the same name. I called it with eps=1.e-7 to ensure smooth behavior. However, as I gradually changed the parameters affecting the integrand, I found the result jumped abruptly by a factor of 1.002. In other words, the routine underestimated the error by a factor of 2e4.

I've always liked Romberg integration just because it is often so fast. But I've found similar problems on occasion with the Numerical Recipes Fortran routine. The problem seemed to be that the routine will occasionally get a lucky guess as to the answer, and return prematurely. In the IDL routine I tried setting K=8 instead of K=5 and so far have not had a problem (though I haven't tested much yet!). I've also tried to fix this in the past by requiring two successive good guesses. But does anyone have another suggestion?
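
[Editor's note: a sketch of this "two successive good guesses" idea applied to an ordinary trapezoid-based Romberg loop. It illustrates the stopping rule only; it is not NR's QROMB/QROMO, and KMAX and the tolerance test are our own choices.]

    #include <math.h>

    #define KMAX 20

    double romberg2(double (*f)(double), double a, double b, double eps)
    {
        double R[KMAX][KMAX];
        double h = b - a;
        int good = 0;                      /* consecutive passes of the test */
        R[0][0] = 0.5 * h * (f(a) + f(b));
        for (int i = 1; i < KMAX; i++) {
            h *= 0.5;                      /* refine the trapezoid rule */
            double s = 0.0;
            long m = 1L << (i - 1);
            for (long k = 0; k < m; k++)
                s += f(a + (2 * k + 1) * h);
            R[i][0] = 0.5 * R[i-1][0] + h * s;
            for (int j = 1; j <= i; j++)   /* Richardson extrapolation */
                R[i][j] = R[i][j-1]
                        + (R[i][j-1] - R[i-1][j-1]) / (pow(4.0, j) - 1.0);
            if (fabs(R[i][i] - R[i-1][i-1]) <= eps * fabs(R[i][i]))
                good++;                    /* one agreement is not enough... */
            else
                good = 0;
            if (good >= 2)                 /* ...require two in a row */
                return R[i][i];
        }
        return R[KMAX-1][KMAX-1];          /* best effort: never converged */
    }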

And:

(7 May 1997) .... I'm currently playing with some of the Numerical Recipes Fourier transform code. I used the TWOFFT routine to transform two sets of data, but was puzzled by the fact that the zero frequency term was identical for both sets, for no obvious reason. I then ran the FOUR1 routine for just one of the sets, and although the non-zero frequency terms appear to be the same (over the limited set I sampled), the zero frequency term is definitely different....

.... in the case of the Fourier transform stuff, it might be wise for someone else to confirm my result. I'm sufficiently inexperienced with FFTs to not be sure if the result I saw was an artifact of my usage and/or understanding of how it's supposed to work.

On those few occasions when I used Numerical Recipes as a starting point for code that I incorporated into my own library, I performed extensive testing to make sure there weren't any ways to cause crashes (like division by zero), and invariably I'd find holes that needed to be patched. I've also found more efficient coding alternatives: Numerical Recipes resorts to some floating point calculations in one of the sorting routines for which I found a simple integer alternative (at least their floating point stuff wasn't inside a loop).

I've also got some experience with their simplex implementation (AMOEBA), and discovered it could get trapped inside the routine and fail to converge. For example, if the chi-square hypersurface is sufficiently complex, then when the simplex is shrunk, it's possible that one of the vertices will find itself at a *higher* chi-square value than it was before the simplex was shrunk! Code that locks up in an internal loop is unacceptable.
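
[Editor's note: not AMOEBA itself -- just the defensive pattern this experience argues for: every iterative kernel should carry an iteration cap and a failure status instead of an unbounded internal loop. The toy iteration (a fixed point of cos) and the cap NMAX are our own choices.]

    #include <math.h>
    #include <stdio.h>

    #define NMAX 1000

    /* Returns 0 on convergence, -1 if the iteration cap is hit. */
    int fixed_point(double *x)
    {
        for (int iter = 0; iter < NMAX; iter++) {
            double xn = cos(*x);
            if (fabs(xn - *x) < 1.0e-12) { *x = xn; return 0; }
            *x = xn;
        }
        return -1;   /* caller can restart elsewhere or give up --
                        but the program does not hang */
    }

    int main(void)
    {
        double x = 1.0;
        printf("status %d, x = %.12f\n", fixed_point(&x), x);
        return 0;
    }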

And:

(4 June 1997)
The Numerical Recipes FFT is a good example of a routine that has clearly been designed for maximum simplicity and clarity at the expense of suitability for practical work. Not only is it limited to powers of two (which is especially unfortunate in the case of multidimensional transforms), but it is also very slow. A comparison of many FFT implementations shows that the NR code is much worse than other available software. This would be fine if NR made it clear that their code is a pedagogical demonstration only, but it does not -- no mention is given (in my edition of NR) of the better FFT algorithms and implementation strategies that exist.

And:

(13 January 1999)
I sent this report to bugs@nr.com on July 14, 1998; they did not acknowledge it. The "benchfft" web page that I referred to in my e-mail no longer exists.

Text of message follows:

This is a paragraph of a letter I sent to the maintainers of the fftw web site, discussing the accuracy results at http://theory.lcs.mit.edu/~benchfft/results/accuracy.html

The bottom line is that the Numerical Recipes code for FFTs is not as accurate as the best codes available, and four1 and realft (or drealft) may not be suitable for use as the basis for a fast bignum arithmetic package, which is, by coincidence, an example given in Section 20.6 of NR.

[Quoting another usenet posting:] You mention that Mayer (simple), NAPACK, Nielson, and Singleton "use unstable iterative generators for trigonometric functions". I do not dispute this, but unstable is in the eye of the beholder. The Numerical Recipes C code uses a generator for an FFT of $N$ points that has error of size $\sqrt{N}$ units in the last place at the end of the array, i.e., it loses $\log_2\sqrt{N}$ bits in the calculation of the trigonometric function, and hence in the FFT. A well-designed FFT will have a generator that loses only $\log_2\sqrt{\log_2 N}$ bits in the last place in the generator. This small difference, which is noticeable when calculating the large FFTs needed for bignum arithmetic, may make the Numerical Recipes routine unusable for this application, especially because the real-to-complex wrapper, realft, uses the same generator and doubles the size of the problem (on input, and on output). (I have the accuracy output of bench for sizes up to 4 mega-whatevers, if you'd like to look at it---the average relative error for nrc_four1 is 100 times as large as the average relative error for the best routines with 4 million points.)
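
[Editor's note: the drift described above is easy to observe. The following small C program -- ours, not NR's -- builds the twiddle factors sin(2πk/N) once with the NR-style repeated-angle recurrence and once with direct calls to sin(), and prints how far the recurrence has drifted by the end of the table; the result can be compared with the $\sqrt{N}$-ulp estimate quoted above.]

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    int main(void)
    {
        const long N = 1L << 20;           /* a large FFT-sized table */
        double theta = 2.0 * M_PI / N;
        double wpr = -2.0 * sin(0.5 * theta) * sin(0.5 * theta);
        double wpi = sin(theta);           /* NR-style recurrence seeds */
        double wr = 1.0, wi = 0.0, maxerr = 0.0;
        for (long k = 0; k < N; k++) {
            double err = fabs(wi - sin(theta * k));
            if (err > maxerr) maxerr = err;
            double t = wr;                 /* one recurrence step:        */
            wr = wr * wpr - wi * wpi + wr; /* wr <- cos((k+1) theta)      */
            wi = wi * wpr + t * wpi + wi;  /* wi <- sin((k+1) theta)      */
        }
        printf("max drift of recurrence vs. sin(): %g\n", maxerr);
        return 0;
    }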

And:

(9 March 1999)
Thank you for your informative page about possible pitfalls with Numerical Recipes. I've just had a similar experience. After reading your page, I decided nevertheless to be lazy and copy one of the NR routines for sorting, thinking to myself "it's insertion sort, how could they get it wrong? I could code it in 15 minutes, or copy it from Numerical Recipes in 5." Using the online source of the second edition, I promptly went to the Insertion Sort (pg.330), and copied it into my editor, fired up gcc, and got strange output. As it turns out, there's a typo, and the comparison in the main for loop should be "j < n", not "j <= n", else you read off the end of the array.

What's bothersome is that this is 10 lines of code. You'd think they could check it -- and this is the 2nd edition! And a sorting routine to boot, about as simple as they get. I guess one always pays for laziness. I'll be writing my own quicksort.

Another correspondent reports (6 May 1999), concerning this remark:
Since this particular post actually gave a specific line that was in error for the second edition, I opened my book to take a look. I believe the poster did not realize that NR specified a unit-offset array pointer. Thus, j <= n does NOT run off the end of the array.
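
[Editor's note: the disagreement is purely about array-offset conventions, and it is a common trap when transplanting NR code. NR indexes arrays from 1, so its insertion-sort loop runs with j <= n; a zero-offset C version must use j < n. A minimal zero-offset sketch (ours, not NR's routine):]

    /* Straight insertion sort over a[0..n-1].  Note j < n: with
     * zero-offset arrays, j <= n really would read past the end. */
    void insertion_sort(double a[], int n)
    {
        for (int j = 1; j < n; j++) {
            double v = a[j];
            int i = j - 1;
            while (i >= 0 && a[i] > v) {
                a[i + 1] = a[i];   /* shift larger elements right */
                i--;
            }
            a[i + 1] = v;          /* drop v into its slot */
        }
    }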

This criticism was offered on (2 Sept 1999):

I have just recently seen your web page Re NR. I was thoroughly disappointed with the approach you have taken both in terms of completeness and fairness. In this respect I urge you to consider:

1) Many of the errors/bugs/complaints you have listed are dated 1991. This is 1999, nearly 2000; are you sure that these complaints are still relevant? Does your web page deserve updating?

2) You do not seem to have posted many letters indicating users with "good" experiences. Have you made an effort to collect any such? Do you consider the NR response a sufficient case for this?

3) You do not seem to have posted many letters indicating the shortcomings of commercial (and very expensive) packages. For example, IMSL has had and still does have a large number of shortcomings. Considering that IMSL is 200 TIMES MORE EXPENSIVE (and that's just for a PC; the ratio for a CRAY etc. is considerably greater), uses dated formats/methods, does not have very good manuals, etc., some justifiable issues can be raised in favour of NR on at least a "value" basis.

4) Some of the complaints arise from users who by their own admission do not know too much. Are they expecting to become math experts simply by the purchase of a very inexpensive software package? Do they believe that buying an expensive package would have prevented them from "blowing themselves up"? I expect that letters of that type do not belong on your web site when considering the merits of such a package, and serve only to distort the matter.

5) In relation to 4) above, would you consider indicating on your site that even some simple numerical tasks require a certain basic mathematical knowledge, and that these, as well as the more complex ones, are not for the uninitiated -- no more than a book on medicine would prepare someone to become a surgeon?

6) If you are going to "poo-poo" such a package, would you consider doing a little work to determine a "benchmark", and indicate which packages are "good" (and possibly why)? For example, the public-domain libraries such as LAPACK, etc. are in many ways much better than commercial packages, but this type of info is also missing from your web site. Or at the very least, if you are going to "poo-poo" one package you should "poo-poo" all packages (as certainly there is no such thing as a perfect package).

There are various other lesser points, but I hope that this collection indicates the direction of my thoughts.

For the record, I obtained my PhD in applied math, as pertaining to numerical methods for PDEs. I have been using and developing numerical methods for over 20 years on a wide variety of machines and libraries, including NAG, IMSL, MINPACK, LINPACK, LAPACK, NR, etc., as well as in a considerable number of my own projects.

And:

(28 September 1999)
* Complex Hermitian matrix diagonalisation:

I ran a C program which generated a complex matrix and then called the diagonalisation subroutines from Numerical Recipes and from mathc90. The catch in Numerical Recipes is that to diagonalise a complex Hermitian matrix A + iB, it must first be converted to the real matrix:

    A  B
   -B  A
which is twice the size of the original complex matrix. It is this real matrix that gets diagonalised. Even so, the results seem to have a lower accuracy than the ones obtained from mathc90. The example given in the user guide to mathc90 was less accurately solved by Numerical Recipes.
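
[Editor's note: a sketch (ours, not the correspondent's code) of the embedding just described. For Hermitian H = A + iB, A is symmetric and B is antisymmetric, and the real 2n x 2n matrix below is symmetric, with each eigenvalue of H appearing twice. Working at twice the dimension inflates the O(n^3) eigenvalue work, which accounts for part of the slowdown in the timings below.]

    /* Pack a Hermitian matrix H = A + iB (A symmetric, B antisymmetric)
     * into the real symmetric 2n x 2n matrix [A B; -B A] described
     * above.  All matrices are row-major; the function name and layout
     * are our own choices for illustration. */
    void embed_hermitian(int n, const double *A, const double *B, double *M)
    {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                M[i * 2*n + j]             =  A[i * n + j];   /* top left:     A */
                M[i * 2*n + (n + j)]       =  B[i * n + j];   /* top right:    B */
                M[(n + i) * 2*n + j]       = -B[i * n + j];   /* bottom left: -B */
                M[(n + i) * 2*n + (n + j)] =  A[i * n + j];   /* bottom right: A */
            }
        }
    }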

Seconds required to diagonalise matrices of various dimensions:

dimension         Numerical Recipes             mathc90
---------         -----------------         -----------------
                  MacG3  | Cray-J90         MacG3  | Cray-J90
 90 x 90             1 s |     7 s             1 s |     1 s
180 x 180           17 s |    57 s             3 s |     7 s
270 x 270          132 s |   191 s            12 s |    22 s
360 x 360          500 s |   432 s            43 s |    53 s

Ratio of time with NR to time with mathc90:
     6-11 on the MacG3
     7-9  on the Cray-J90
The program was exactly the same on both machines and was compiled with the optimization option -O3. It was run interactively on the Cray under conditions of normal use (i.e., several users logged in and with batch jobs running in the background).

[Editor's note: The software in the package mathc90 is based on EISPACK. See Netlib.]


I don't know if the senders want to be publicly identified. If you're interested in contacting them, send me e-mail, and I'll ask them to contact you.


Our experience, and that of many others, is that it is best to get numerical software from reliable sources. The easiest and cheapest is Netlib, which includes the collected algorithms from ACM Transactions on Mathematical Software (which have all been refereed), and a great many other algorithms that have withstood the scrutiny of the peers of the authors, but in ways different from the formal journal refereeing process. The editor of this page has collected links to several other sources.


Compiled by W. Van Snyder. I've removed my e-mail address to avoid spam, but you might try
nospam AT math DOT jpl DOT nasa DOT gov, after changing nospam to vsnyder

And for a response to some of the above, see the NR response.