- “To understand recursion, you must first understand recursion.”
During the months I’ve spent writing this book, I can assure you that this joke gets funnier the more you hear it.
I love to read books like this rather than the AI-generated slush that seems to be so abundant today. This author has done a great job here. Although I have to say, it is sad that I feel like I have to ‘audit’ everything to see if it’s written by a robot before I invest too much time in it /:
rednafi 5 days ago [-]
True for blogs as well. The saddest thing is, a few influential web folks have switched gears to AI evangelism and started spamming people with low-quality AI-generated slop. Kinda broke the whole circle of trust. Now, if I get the slightest whiff of gippity in a text, I just bail out.
MonkeyClub 1 day ago [-]
> gippity
This is now my new favorite technical term; I strongly second wider adoption!
deveesh_shetty 5 days ago [-]
I remember reading "Automate the Boring Stuff with Python"; it helped me a lot in my early college years.
I glanced through the initial chapter of this book, and it is so well written that even a newbie can understand it.
Personally I have had a pretty hard time understanding recursion and all its intricacies, and sometimes still can't wrap my head around it.
Would love to read this book. I have filled the form for a review to read the free ebook; hopefully I receive a copy!
fire_lake 4 days ago [-]
I don’t understand this book.
It begins by explaining how great recursion is.
Then later it explains why you shouldn’t use it for large inputs in the two chosen languages (JavaScript and Python) to avoid blowing up the stack.
Then it argues that tail call optimisation is bad!
So when is a good time to use recursion?!?
And what is the purpose of the book?
dahart 4 days ago [-]
Recursion is great for CS education, but recursion should be rarely/never used in production code. Tail call recursion is one exception to that - I see it in production code occasionally. But tail call recursion is limited to cases of 1-dimensional recursion that can be transformed into iteration - and iteration is better anyway, so even tail calls need some extra justification for use in production code.
When I was a game programmer, recursion was to be avoided at all costs.
As a former graphics researcher, I’ve used recursive flood-fill in several research papers, and recursive flood-fill on a large image will blow your stack, so I always translate it into an iterative algorithm with a heap-allocated stack and breadcrumbs for backtracking. There’s often a way to increase the stack size, but that’s really annoying and not always possible on every machine you want to run on.
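In Python, that translation looks something like this minimal sketch (illustrative names, not actual research code): the pending pixels live in a heap-allocated list rather than in call frames, so image size is bounded by memory instead of stack depth.

```python
def flood_fill(image, x, y, new_color):
    """Iterative flood fill: an explicit stack replaces the recursion."""
    old_color = image[y][x]
    if old_color == new_color:
        return
    stack = [(x, y)]  # heap-allocated "call stack"
    while stack:
        cx, cy = stack.pop()
        if (0 <= cy < len(image) and 0 <= cx < len(image[0])
                and image[cy][cx] == old_color):
            image[cy][cx] = new_color
            # push the four neighbours instead of recursing into them
            stack.extend([(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)])
```

The same pattern works for any image size without touching the interpreter's recursion limit.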
One case where recursion or something that looks like recursion is used in production code is when dealing with tree data structures. Those are common, and traversing them is common. Recursion is acceptable for that - though people who care about performance will still optimize it into an iterative algorithm whenever possible (which is most of the time).
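That iterative tree traversal can be sketched like this, assuming simple dict-based nodes (a hypothetical shape, just for illustration): the list of pending nodes plays the role the call stack would otherwise play.

```python
def iter_preorder(root):
    """Depth-first preorder traversal without call recursion."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        out.append(node["value"])
        # push right first so the left subtree is visited first
        stack.append(node.get("right"))
        stack.append(node.get("left"))
    return out
```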
YeGoblynQueenne 4 days ago [-]
>> Recursion is great for CS education, but recursion should be rarely/never used in production code.
How so?
Do you mean "in Python" or similar languages, or do you mean in general? I write basically all my code in Prolog, and there it's all recursion, no iteration. There are no iteration constructs. The Prolog interpreter has tail-call optimisation and you learn to use it soon enough.
Anyway let's not be all dogmatic like that. Rules always have exceptions (including this one ha ha) so there's no point in being so absolute about things.
Edit: Rule no 1 from NASA is "avoid complex control flow". From your link:
1. Avoid complex flow constructs, such as goto and recursion.
Seems that even NASA accepts a modicum of GOTOs and recursion.
dahart 4 days ago [-]
Correct, NASA’s first rule isn’t absolute; it says “avoid”. But they do have a hard rule that loops (and recursion) must have fixed bounds, which still rules out most non-trivial recursion.
> How so?
Stack + function call recursion is almost always slower and more memory intensive than alternatives, not to mention risk of stack overflow, so most people who care about perf and mem will opt out anyway. Tail recursion isn’t included, because it optimizes out the function call, and tail recursion is iteration anyway.
If tail recursion isn’t an option, it means a tree or multi-branch recursion, which is generally speaking difficult to reason about and difficult to bound, so error prone. People who care about safety, like NASA, avoid recursion for those reasons. I’m sure they will use recursion when it’s the only choice, or the best choice. Choice of data structure might affect whether recursion is best.
> I write all my code in Prolog
Interesting! For what industry? I’ll admit I’ve never met any production Prolog, but of course the recursion rule doesn’t apply to a language that has no other choice. Wikipedia kinda bluntly suggests that Prolog is best for research and education, and rarely used in production code…?
YeGoblynQueenne 3 days ago [-]
What is "non-trivial" recursion? Honest question, I'm not familiar with the term. E.g. is reversing a list recursively "non-trivial"?
I think the 2nd rule ("All loops must have fixed bounds. This prevents runaway code.") refers to numerical bounds, so if you take it literally it would also eliminate e.g. while loops with a boolean condition, but I don't know if that's the intention. In NASA, with very expensive equipment on the line, maybe; but for most other environments that would be an onerous requirement.
In any case you can turn any recursion with non-numeric bounds to one with numeric bounds easily. You do that by adding a parameter that counts the recursion steps and exits if the count exceeds a limit. For added security, you can do this check in each "clause" (literal or figurative) of the recursive procedure, not just the boundary condition. Then you have depth-limited recursion and bobs your ankle. Or Bob is your uncle, I'm never sure about that one.
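In Python-ish terms, that counter idea looks something like the sketch below (find_key is a made-up example, and treating None as "not found" is a simplification): the depth parameter turns a non-numeric termination condition into a numeric bound.

```python
def find_key(data, key, depth=0, max_depth=100):
    """Depth-limited recursive search through nested dicts.

    The counter parameter gives the recursion an explicit numeric
    bound, even though the natural stopping condition is non-numeric.
    """
    if depth > max_depth:
        raise RecursionError("depth limit exceeded")
    if isinstance(data, dict):
        if key in data:
            return data[key]
        for value in data.values():
            found = find_key(value, key, depth + 1, max_depth)
            if found is not None:
                return found
    return None
```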
>> Interesting! For what industry? I‘ll admit I’ve never met any production Prolog, but of course the recursion rule doesn’t apply to a language that has no other choice. Wikipedia kinda bluntly suggests that Prolog is best for research and education, and rarely used in production code…?
Most of my Prolog coding has been for my PhD and post-doc research of course, but I'm now in my third industry contract where I work almost exclusively with Prolog (the rest is a bit of shell scripting and the like). I don't want to say what the industry is, or the job, but as a for instance, of my two previous Prolog industry gigs one was at an AI startup and the other was on a legacy system for managing rental car fleets, both in France two years ago. The car system was written in the '90s and had changed many hands since then, so it was a royal mess and I didn't stay long working on it, but I recently spoke with one of the other devs and it's still going.
There's plenty of that legacy Prolog expert system code around, all left over from the '90s, written before the winter. SICStus, the biggest commercial Prolog vendor, used to brag on their landing page that if you've booked any air travel recently then chances are you used a Prolog-based system, which shouldn't come as a surprise. After all, every time you use online banking, you are using a system based on COBOL and running on a mainframe. That claim has recently disappeared from the SICStus site though, and I don't know what that means. Anyway, Prolog coders occasionally hear about gigs working on stuff like that through the grapevine, but judging from my experience those are mainly maintained by programmers who have never seen Prolog before in their lives, until they're thrown in at the deep end and have to make do.
But I see you're making a distinction between "research and education" and "production code"? Why's that? The code I write for my research is production code. I can tell you that with a straight face because I've worked in the industry many years before I did my PhD and my Prolog code is much more stable, and maintainable to boot, than 90% of the rickety, held together with bits of string and gaffer tape, "production" code I worked with as an industry code monkey.
And btw I'm still a code monkey. That's what makes code "production": who writes it, not who it is for.
Btw my post-doc Prolog code had to run on a robot that cost a few thousand quid (I'm in the UK) so it had to be industrial strength. I never got to test it because my funding er expired. In academia, you have to be careful who you're telling hard, technical truths to. So now I'm working in the industry.
dahart 3 days ago [-]
If a function calls itself exactly once, it’s a trivial recursion. Same is true if it calls itself exactly four times. If the entire problem can be reasonably inlined or “unrolled”, it’s trivial. Having to use fixed bounds and a small amount of memory might mean that all recursion is trivial. Reversing a list is a perfect example of something that is best solved without recursion. Do you really want to use as many stack frames as you have list entries? No, not really. Can you tell me off the top of your head what the limit on max list size is without looking it up for a list-reversing recursive algorithm?
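To make the stack-frame cost concrete, here's a hedged Python sketch (illustrative, not from the book): the recursive version consumes one frame per element and dies near CPython's default limit of roughly 1000 frames, while the iterative version uses constant stack space.

```python
import sys

def reverse_rec(lst):
    # one stack frame (plus a list copy) per element
    if not lst:
        return []
    return reverse_rec(lst[1:]) + [lst[0]]

def reverse_iter(lst):
    # constant stack usage regardless of length
    out = []
    for i in range(len(lst) - 1, -1, -1):
        out.append(lst[i])
    return out

reverse_iter(list(range(5000)))        # fine at any length
try:
    reverse_rec(list(range(5000)))     # exceeds the default frame limit
except RecursionError:
    print("recursive version died; limit is", sys.getrecursionlimit())
```

The max list size for the recursive version depends on `sys.getrecursionlimit()` and whatever frames are already on the stack, which is exactly the kind of hidden bound the question above is pointing at.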
Yes, NASA’s rule is intended to prevent loops with only boolean conditions. Everyone (i.e. people who did not write the code) should be able to know for certain by glancing at a loop whether it will finish, and fixed bounds gets you most of the way there. It is fine if you put a fixed call count on the recursion, assuming that recursion is actually the best choice. It’s just that actual recursion is almost never the best choice. Having a fixed call count and a fixed recursion depth nearly always suggests that the problem can be easily translated to iteration, and iteration is cheaper. Even something as simple as using a simulated stack structure to replace function calls is usually preferable to burning memory with stack frames and wasting registers and cycles passing parameters around. People who do high performance work or just care about efficiency don’t question these things.
NASA’s rules aren’t unusual among organizations that do safety-critical code. Even video games, like I mentioned, used to do nearly everything on NASA’s list. Embedded code on small devices still does all of this; car manufacturers follow the same rules. When memory is very small or limited, and when crashes have high consequence and are unrecoverable, the situation is completely different from a grad student sitting at a terminal.
Their rules aren’t very onerous either; most of the time they just require changing habits and thinking differently. Once you get used to them, just like with any coding practices, there are simple patterns you can follow to implement just about anything.
One exercise that might(?) be fun for you is take some of your research code and port it to the smallest Arduino you can buy, and get it to run. Even if Prolog is an option on that platform, you might find that standard recursion is not.
“Production code” is a fairly common/standard term (Google it and browse multiple answers) that does mean in part who it’s for. The term is meant for situations where code is usually written by teams, most often large teams, and where the code is written in service of a product, and will be run by people other than the authors of the code. Production code is not a comment on the quality of the code nor of the choice of language, but it does mean the code will be tested by a QA department and that it will be run under uncontrollable conditions by people who are not experts or coders.
If you write code for your PhD and you run it yourself, it’s not production code, period, no matter how stable or high quality you believe it to be. For sure there are cases where research code and production code overlap. They are relatively rare, but you might be in that part of the Venn diagram.
YeGoblynQueenne 3 days ago [-]
I think your analysis about efficiency is specific to one kind of implementation of recursion that is not the only possible one and certainly not the most efficient one, or how it's done in Prolog. But talk is cheap so I offer the following as a proof of concept.
This is my factorial predicate in Prolog, tail-recursive style:
%! factorial(+N,-Factorial) is det.
%
% Calculate the Factorial of N
%
factorial(0,1):-
    !.
factorial(N, F):-
    factorial(N, 1, F).

%! factorial(+N,+F,-G) is det.
%
% Business end of factorial/2.
%
factorial(1,F,F):-
    !.
factorial(N,F,G):-
    F_ is N * F
    ,N_ is N - 1
    ,factorial(N_,F_,G).
And this is its output running on SWI-Prolog and printing out statistics about its execution:
?- time( factorial(100_000, _N) ), format('~e~n',[_N]).
% 199,999 inferences, 0.672 CPU in 1.184 seconds (57% CPU, 297673 Lips)
2.824229e+456573
true.
?- statistics.
% Started at Sun Jan 05 16:47:25 2025
% 0.672 seconds cpu time for 546,365 inferences
% 6,574 atoms, 4,766 functors, 3,441 predicates, 51 modules, 134,446 VM-codes
%
% Limit Allocated In use
% Local stack: - 20 Kb 2,432 b
% Global stack: - 508 Kb 380 Kb
% Trail stack: - 30 Kb 384 b
% Total: 1,024 Mb 558 Kb 383 Kb
%
% 22,105 garbage collections gained 9,936,079,752 bytes in 0.063 seconds.
% 4 atom garbage collections gained 1,573 atoms in 0.000 seconds.
% 7 clause garbage collections gained 156 clauses in 0.000 seconds.
% Stack shifts: 2 local, 7 global, 7 trail in 0.000 seconds
% 2 threads, 0 finished threads used 0.000 seconds
true.
That will run on the smallest Arduino you can find, or at least the smallest embedded system you'll likely find that runs an OS.
Note that the above is measuring all utilisation since I started up SWI-Prolog, not just to run my code (e.g. 3,441 predicates is all the SWI-Prolog libraries).
If I try to do the same calculation on the Windows 11 Calculator (switch to "Scientific" version), I get the following output:
Overflow
That's on my 64 GB RAM laptop. I'm saying this to show that 100k! is not a small calculation, but it can be done efficiently with tail-call optimisation.
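For comparison, here is roughly the loop that tail-call optimisation turns the accumulator version into, sketched in Python (an illustration of the transformation, not how SWI-Prolog actually compiles it): the accumulator F and counter N become plain loop variables, so there is no stack growth at any input size.

```python
def factorial(n):
    # what the tail-recursive factorial/3 effectively becomes after
    # TCO: the accumulator and counter are mutated in place
    f = 1
    while n > 1:
        f = n * f
        n = n - 1
    return f
```

With Python's arbitrary-precision integers this handles very large N too, limited by time and memory rather than stack depth.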
>> People who do high performance work or just care about efficiency don’t question these things.
I'll try not to be hurt by your words :P
dahart 3 days ago [-]
I think you’re misunderstanding my points. I already said (more than once) tail recursion doesn’t count - it is implemented as iteration, and factorial is trivial to do without recursion, it just doesn’t help you make a good case. The 10GB of garbage collection doesn’t help either, and implies more time spent in memory allocation and cache churn than it’s reporting.
BTW, I am including (and NASA is talking about) embedded systems without an OS. Most of my game programming career involved platforms with no OS.
YeGoblynQueenne 3 days ago [-]
Yes, I guess I misunderstand your point. When did you say that tail recursion doesn't count? And why wouldn't it? That's how you implement recursion efficiently. That's the whole point of the conversation so how come it doesn't count?
And when did we switch from "smallest Arduino" to "system with no OS"? I'm pretty sure that if you keep looking for the smallest platform on which you can run something, anything, you'll find a point you can make and prove, but that's kind of meh. You're not going to run a SAT solver, a planner or an image classifier on an Arduino with 512 KB RAM. You're not going to run Doom on a bank card either. Not because of recursion, because of everything else.
And if you can't transform recursion into iteration, that's a procedure you can't run without recursion anyway.
dahart 3 days ago [-]
Tail recursion is iteration, it’s not true recursion. I said it in my first and second and fourth comments in this thread. If you didn’t notice it or you think that tail recursion is the point of this thread, it’s because you’re misunderstanding me. I’m saying that my comments about inferior performance and memory usage do not apply to tail recursion, because tail recursion optimizes out the function call and stack usage. However, tail recursion cannot always be used, so when we’re talking about recursion proper and recursion in general, it’s fair to assume we’re talking about stack frames and function calls. NASA might be okay with a fixed-bound tail recursion, but not other kinds of recursion.
In my mind, I was assuming and including no-OS scenarios from the start, you just assumed an OS because you’re using threads and garbage collection and memory allocation. NASA’s computers on the space shuttle and rovers haven’t historically run OSes, and I assumed you might have known that. Little computers like ECUs in cars don’t run OSes either. Console video games before about 15 years ago also rarely had an OS. The Nintendo Wii didn’t have one, nor any Nintendo console before that. We never switched, I’m clarifying now because you specified an Arduino with an OS. I was talking about safety-critical, embedded, and small memory systems from the start. This point is very relevant because it’s one of the reasons that recursion is to be avoided. Doom does run on very tiny computers without an OS, like a calculator. I don’t think Doom uses any recursion or heap allocation (except maybe to initialize) or garbage collection.
I’ve never seen a recursive algorithm that can’t be transformed into iteration, or where the recursion can’t be emulated with a faster iterative mechanism and smaller memory footprint than function calls. I doubt that one exists. Tail recursion is an example of trivial recursion automatically transforming function calls into iteration.
YeGoblynQueenne 2 days ago [-]
>> I’ve never seen a recursive algorithm that can’t be transformed into iteration, or where the recursion can’t be emulated with a faster iterative mechanism and smaller memory footprint than function calls.
Primitive recursive functions are the ones that can be "unrolled" in iterative loops. There are functions that are recursive, but not primitive recursive, like the Ackermann function (https://en.wikipedia.org/wiki/Ackermann_function). Those cannot be replaced by iteration.
I think you're putting far too much trust into "I haven't seen" and not considering the fact that there is more to see.
dahart 2 days ago [-]
> I think you're putting far too much trust into "I haven't seen" and not considering the fact that there is more to see.
No, I was being cheeky. The fact that recursion can always be transformed into iteration has been proven. There is no such thing as a “procedure that can’t run without recursion”, which I assumed you already knew, and you might want to keep in mind if you want to keep defending recursion.
So do you have any real-world examples where recursion is preferred? This sub-thread started because you were taking umbrage with my claim that recursion is for education and not often used in production code. So far your examples are proving my point, 100% of your examples are classroom problems and not things people need in practice. List reversing might be the most common, and it’s very rare for someone to need to implement it (languages and libraries usually provide it), and far rarer still to need to implement it recursively in practice. Factorial of large numbers and Ackermann function are purely academic exercises.
YeGoblynQueenne 2 days ago [-]
>> There is no such thing as a “procedure that can’t run without recursion”, which I assumed you already knew, and you might want to keep in mind if you want to keep defending recursion.
There is such a thing. Non-primitive recursive functions can't be converted to iteration.
I'm sorry but where did I give examples that are "100% classroom problems"?
I gave examples of production code that use Prolog, therefore recursion since Prolog has no iteration constructs: the three jobs in the industry I've held, and the airline software from Sicstus. I can also add IBM's Watson (it used Prolog), Datalog databases (recursion is the point), and the SWI-Prolog website which is running on an http server implemented in Prolog (they call that "eating their own dogfood"). Those are all things that people need in practice and not "classroom problems". There's plenty more Prolog code running in the wild but obviously I don't know every program anyone has ever written using Prolog.
There's also code in Answer Set Programming (e.g. I'm aware of planners for robotics applications), Haskell (e.g. xmonad, a window manager written in Haskell), and plenty of code people write in Scheme, Scala, various Lisps, etc., that also don't have iteration constructs. There's plenty of recursion in "real world" and "production" situations. But I get the feeling you don't work with it, so you just don't believe it exists. That's not a great way to reason about things.
dahart 2 days ago [-]
> Non-primitive recursive functions can’t be converted to iteration.
That’s not true, I think you’re misunderstanding the article you linked to. Primitive recursive functions require fixed bounds, but that is not the same thing as all iteration. The Church-Turing thesis proves that anything you call recursion can be iterated.
> I'm sorry but where did I give examples that are "100% classroom problems"?
I answered that question in the comment you replied to.
> But I get the feeling you don't work with it, so you just don't believe it exists. That's not a great way to reason about things.
Well, to that I offer that writing incorrect assumptions and straw-man responses isn’t a great way to communicate. I actually see recursion every day, and I still stick by what I said. I even work on a team that builds recursive hardware that accelerates traversing recursive data structures. To me it really seems like you’re still not understanding my point about call stacks vs iteration. I’m sorry my posts are making you feel defensive. Maybe you missed the part where I mentioned that if you work in a language like Prolog that offers no choice, you don’t need to worry about it? The time to worry about it is when there’s a choice, like in C++ or JavaScript or Python. Keep using tail recursion, that’s a good way to avoid some of the problems with recursion!
YeGoblynQueenne 1 day ago [-]
I'm not being defensive, but this conversation is getting nowhere and that's becoming a little frustrating. Maybe you don't see it but your comments in this thread are all along the lines of "don't use recursion"... until they're suddenly not. You wrote (https://news.ycombinator.com/item?id=42595354):
>> Recursion is great for CS education, but recursion should be rarely/never used in production code. Tail call recursion is one exception to that - I see it in production code occasionally. But tail call recursion is limited to cases of 1-dimensional recursion that can be transformed into iteration - and iteration is better anyway, so even tail calls need some extra justification for use in production code.
"... recursion should be rarely/never used in production code" and "even tail calls need some extra justification for use in production code". Then you listed the JPL coding guidelines that say "avoid recursion". And now you're saying that what you really meant was that yes, you should use recursion, and tail recursion, but that doesn't count because it can be replaced by iteration anyway? Then why not avoid the replacement service and have all your loops be iterative in the first place? Or is that your argument? I really can't tell. And I'm getting the feeling that you don't know exactly what you're arguing for yourself, especially if you say (big reveal) you are working with recursion everyday. What's this all about now?
>> That’s not true, I think you’re misunderstanding the article you linked to. Primitive recursive functions require fixed bounds, but that is not the same thing as all iteration. The Church-Turing thesis proves that anything you call recursion can be iterated.
Right, so now we're talking about unbounded iteration? Again that's not the impression I got from the discussion upthread. Is that the good, safe, NASA-compliant kind of coding practices you support? I don't think it is.
>> Keep using tail recursion, that’s a good way to avoid some of the problems with recursion!
Now you're just being patronising. I was hoping we could have a robust disagreement without taking the piss, but I guess this one too has gone the way of all the conversations on the internet. Let's end this here then; I'll let you have the last word (since I wrote a bit already myself).
And no hard feelings from me. Communication is difficult.
Btw, this is interesting:
>> I even work on a team that builds recursive hardware that accelerates traversing recursive data structures.
If you want to say a couple of words about what that is, I'd be curious to hear.
dahart 1 days ago [-]
> Now you’re just being patronising.
Jesus Christ, I was just trying to be nice. Your defensiveness is repeatedly leading to misunderstanding and getting in the way of us having a productive conversation.
> Right, so now we’re talking about unbounded iteration?
You’re conflating multiple different topics here. NASA doesn’t allow unbounded iteration. That’s a separate topic from whether recursion can theoretically be transformed into iteration.
Recursion can always be transformed into iteration. That statement is not and never was dependent on what NASA does. There is no such thing as a recursive function that can’t be iterated. In practice, iterating is as efficient or more efficient than using a call stack. This is still and always was my point. If you’re using a call stack for recursion, you are wasting memory and cycles, which is one of the reasons “Software developed in Prolog has been criticised for having a high performance penalty compared to conventional programming languages.” https://en.wikipedia.org/wiki/Prolog#Limits
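As a concrete illustration of that point, even the Ackermann function, which is not primitive recursive, runs without call recursion once the call stack is made explicit (a hedged sketch; the list is unbounded, so this is iteration with heap memory, not fixed-bound iteration):

```python
def ackermann(m, n):
    """Ackermann's function via an explicit stack instead of call frames.

    Standard transformation: pending outer values of m wait on a list,
    and n carries the current intermediate result.
    """
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                 # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)    # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)    # A(m, n) = A(m - 1, A(m, n - 1))
            stack.append(m)
            n -= 1
    return n
```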
I work on ray tracing hardware, which is conceptually recursive, uses recursive tree data structures, and often implemented using recursion. We make it faster by transforming the recursion into specialized iterative hardware, and also algorithms that do iteration in place of recursion. Computer graphics is full of algorithms that are conceptually recursive and yet always more efficient in practice if you avoid using call recursion and implement them iteratively instead. Character rig animation is one example. And browsers do this for rendering HTML.
mrkeen 4 days ago [-]
> Check out the famous NASA coding rules. Rule number one is no recursion.
And rule number 3 is no heap allocation.
dahart 4 days ago [-]
Yep. I don’t use NASA’s rules though, it was just an example. Like most of graphics research, flood fill is an algorithm you won’t ever see in a safety critical environment like the space shuttle. And for better or worse, heap allocation is used in production code several orders of magnitude more often than recursion. Heap allocation was to be avoided when I was first working in games, but these days people use it all the time.
mrkeen 4 days ago [-]
> Then it argues that tail call optimisation is bad!
I didn't think I'd find this, so I went digging. Sure enough:
The disadvantage of tail recursion is that it requires rearranging your recursive function so that the last action is returning the recursive call’s return value. This can make our recursive code even more unreadable.
Tail recursive functions require rearranging their code to make them suitable for the tail call optimization feature of the compiler or interpreter.
Personally, I take the stance that the tail recursion technique should never be used.
What can one do, but just disagree I guess?
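For what it's worth, the "rearranging" the book objects to usually amounts to threading an accumulator parameter; here's a sketch of the rewrite with a toy list sum (note CPython itself performs no TCO, so this only shows the shape of the transformation, not a stack-safety fix for Python):

```python
def sum_naive(lst):
    # the last action is an addition, not the recursive call itself,
    # so this form is not a tail call
    if not lst:
        return 0
    return lst[0] + sum_naive(lst[1:])

def sum_tail(lst, acc=0):
    # rearranged: the recursive call is the last action, with the
    # running total threaded through the accumulator
    if not lst:
        return acc
    return sum_tail(lst[1:], acc + lst[0])
```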
> And what is the purpose of the book?
To straw-man functional programming with a one-two punch of "I already have that" and "it's bad" so devs never leave Python/JS land to find this stuff out for themselves ;)
tpoacher 2 days ago [-]
Yep. Especially when tail recursion can be immediately converted to an equivalent loop by definition (and vice versa).
A bizarre comment at best, from someone writing a book on recursion.
voxl 5 days ago [-]
Claiming recursion is taught badly and then trying to teach recursion immediately via call stacks is certainly a choice. The guts of how your favorite compiler implements recursion is hardly what I would call "the correct way" to teach the concept.
It's not shocking that students snooze their way through discrete math, try their hardest to forget induction proofs, and then arrive in algorithms or some other higher-level class and try their hardest to forget recursion/dynamic programming.
The reality is the concepts are hard and demand practice to obtain familiarity. There is no quick path to mathematical maturity. If you try your hardest to phone in induction then surprise surprise recursion is nonsensical to you.
theaeolist 4 days ago [-]
Recursion is natural and easy to understand when the argument of the function is a recursive data structure, and the cases are patterns on the constructors. These are natural and useful cases, much better than the alternative. Starting with misguided examples like factorial is demotivating.
saghm 4 days ago [-]
You're not wrong, but I think there's still something missing in terms of how to get from there to a place where people can see how (and more importantly, how to recognize _when_) to apply the concepts towards what they work on after they've learned the basics. In my first semester in college, I took a course that was half in OCaml and half in Java. During the first half, we used recursion extensively, modeled everything via closures rather than objects, and generally learned to think about things functionally. Then we transitioned to Java, where we didn't apply any of that and instead used objects and for loops and imperative logic. That was the last class required for CS majors that used functional programming; after that, all of the requirements were either Java, C, or didn't use any programming language (but maybe used pseudocode, like our algorithms course). I personally took a number of other functional language courses as electives, like one where we learned Haskell, one about the theory of programming languages that used Coq, and a compilers course that used OCaml again, and while I wasn't the only one, it certainly wasn't a majority who went out of their way to take courses like this.
The first time I tried to implement something recursively in C++ at my job after college, my more senior teammates were concerned about whether we could safely assume that all of the different compilers would correctly optimize the tail recursion to avoid blowing up the stack, and they preferred rewriting the logic to be iterative rather than spend time trying to figure that out. This was something I was vaguely aware of as a real-world issue, but it hadn't occurred to my naive junior engineer mind that I couldn't just assume that prestigious "professional" compilers like GCC and Clang and MSVC would take care of this for me when I never had to worry about it in OCaml. After all, everyone in the world seemed to be writing code in C++ with one of these compilers, and the OCaml toolchain didn't exactly have a glowing reputation in my mind when the only place I had heard of using the language didn't even use the default standard library!
I'm not saying that I think this is necessarily a good way to teach recursion generally, but I definitely think there's room for resources like this that help bridge the gap between understanding how recursion works in the best possible environment for it and showing how it works (warts and all) for specific real-world ecosystems.
plasticchris 4 days ago [-]
Call stacks help form a mental model for some people. It’s a way to help get an initial grip on the topic, one of many. Teaching usually involves many simplified models which in my experience helps a lot with initial understanding.
kazinator 4 days ago [-]
We don't need detailed call stacks to help with understanding recursion. Just the somewhat more abstract concept of a procedure/function activation. Calling a function suspends the caller and creates a new activation for the callee. When the callee returns, its activation is disposed of and the suspended caller resumes in its original activation.
plasticchris 3 days ago [-]
The point was pedagogical, sometimes the “best” explanation just doesn’t work for a student. Some might get confused about where the state of the caller goes and get hung up.
3 days ago [-]
ccapndave 4 days ago [-]
Enjoying this so far! One initial comment is that I'm not sure the JS examples should all be embedded in an HTML page; why not write them to be run in Node, which means you can scrap the `script` tags and use `console.log`, which is shorter and removes the need to include line breaks?
MrVandemar 4 days ago [-]
> why not code them to be run in node
Because he considers his readers.
I can guarantee that a staggeringly high percentage of the readers have access to a web-browser capable of executing JavaScript (probably 100%). The percentage of the readers with Node installed will be necessarily lower.
dugmartin 4 days ago [-]
I think I’ve posted this here before but the best short explanation of recursion I’ve ever heard was from my programming languages professor 35 years ago when we were testing out algorithms in a toy Lisp we had to write at the start of the course. All non-infinite recursive algorithms should have a “base case and a smaller caller”, meaning there needs to be a terminal state and the algorithm should narrow its scope on each recursive call. I still use that to this day when I happen to need to write a recursive algorithm.
saghm 4 days ago [-]
This sounds a lot like what Coq enforces programmatically to avoid any programs that don't terminate from compiling (since there's no other looping construct in the language). If I remember correctly, languages like this are called "primitive recursive", and in some ways I think it's a bit unfortunate that we've converged on Turing complete languages as the only paradigm used for most real world things; I feel like there's untapped potential for making primitive recursive languages more ergonomic (like some sort of syntax for "simple" loops that can be syntactic sugar for having to manually define a recursive function).
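As a rough illustration of what such sugar could desugar to (everything here is hypothetical, not from any existing language): a loop primitive that runs a fixed number of steps always terminates, and it is already enough to express primitive recursive functions like factorial.

```python
def bounded_loop(n, state, step):
    # Runs `step` exactly n times -- it cannot fail to terminate,
    # which is the guarantee primitive recursion gives you.
    for _ in range(n):
        state = step(state)
    return state

# Factorial built from the bounded primitive alone: the loop
# count doubles as the recursion depth. State is (i, acc).
def factorial(n):
    i, acc = bounded_loop(n, (1, 1), lambda s: (s[0] + 1, s[1] * s[0]))
    return acc
```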
YeGoblynQueenne 4 days ago [-]
"Primitive recursive" applies to functions, not languages, and -from a practical, programmer's point of view- it refers to recursive functions that can be "unrolled" in an iterative loop (or just a sequence of repeating steps).
Non-primitive recursive functions are functions that are recursive but can't be "unrolled", like the Ackermann function (which was specifically created as a demonstration that there exist such functions).
Recursion doesn't have to be complicated. It's the syntax of most programming languages that makes it complicated. For example in Prolog, a clause is recursive if it has a literal in the body with the same symbol and number of arguments as the head literal. This is easy to show with an example:

    head(X, Y) :- terminating_condition(X, Y).
    head(X, Y) :- shrink(X, Z), head(Z, Y).

That's a recursive predicate. The first clause is not recursive - it's the terminating condition. The second clause is recursive; the head(Z, Y) literal in its body is the recursive call.
The only reason you can't do that in Python is because the syntax doesn't make it easy. In Prolog a clause is a set of literals and a program is a set of clauses so recursion is very simple and natural to express as a re-occurring literal like I explain above.
saghm 4 days ago [-]
You're right about me misremembering the terminology here, but I feel like in your focus on the details you've kind of glossed over the larger point that I was trying to make, which is that having a language where you _can't_ define something non-terminating (like you can in Turing complete languages) is still enough to do quite a lot, and I think it's a bit of a shame that there aren't that many mainstream options for this.
YeGoblynQueenne 3 days ago [-]
Many apologies, I didn't mean to be pedantic, only informative.
I did get the point you are making, but I wasn't sure what to say about it so I didn't say anything. Let me give it a go now. The problem with Turing completeness is that you don't know whether you'll need it until you do. There are non-Turing complete languages around of course, like e.g. HTML (far as I can tell). You can do quite a lot with HTML but, for example, you can't really define business logic.
In logic programming (Prolog is a logic programming language) there have been some efforts to eliminate the non-termination of recursion in Prolog while retaining the good parts of the logic-based and declarative paradigm. The result is languages like Datalog and Answer Set Programming (unfortunately abbreviated as ASP; long before ASP.Net though), which certainly have their place and certainly let you define complex behaviour, including recursive procedures. On the other hand, neither language lets you use lists. You know, lists: [1,2,3]. Lists are recursive data structures, and without the possibility of unbounded recursion you can only have fixed-length lists. People in the ASP community know how to simulate lists by essentially breaking them up into single elements and having some code that looks for the next element, but that's a clunky, hacky, and incomplete notation that just goes to show what a mess one makes when one tries to make something better by making it worse. I'm afraid that's a pattern that applies to much more than logic programming, and trying to make programming languages safe by making it impossible to write non-terminating code will only do the same thing.
Non-termination is by no means something you get only with recursion, of course. I like to tell this story at length but I'll spare you the details: I was working in a C# shop, and one bug that brought the company servers down (this was before the cloud) was caused by a foreach loop with a faulty condition. There was a talk about all that "upstairs". It was put to the suits that it is impossible to guarantee that an iterative loop will terminate, and the decision was made to... use recursion instead. In C#. Obviously that was completely ignored by everyone. That goes to show... something. But mainly that you can jolly well make a mess with unbounded iteration, as with unbounded recursion.
So in general I think what you say sounds good on paper but won't work in practice. It's a bit like asking why we can't have a campfire that won't burn your marshmallows. Well, because fire burns, and that's the whole point. The undecidability of Turing-complete languages is not an academic curiosity. It's a direct result of the power of computers. You can remove the undecidability, but then you've taken away the power.
IIAOPSW 4 days ago [-]
This is fine as a beginner rule of thumb but it shouldn't be regarded as a universal truth about recursion. It's also possible for a simple evaluation without recursion to happen at infinity rather than at zero. In practice this usually means picking a large value of the input as a cut-off point to apply an approximation or return a standard value instead.
For example, take an implementation of exp(x). The exponential function is defined as the sum to infinity of (x^n)/n!. This could be implemented recursively as exp(x,n) = 1+(x/n)exp(x,n+1). The challenge is to figure out the value (or criteria) for what value of n to cut this infinite evaluation short. Well, once n is larger than x all future terms will be multiplied by a factor less than 1. So pick some proportionality constant k such that if x is k times smaller than n (that is, x * k < n) then the remainder is presumed negligible and the function returns 1.
Another really nice example I know involves finding the resistance across a simple but infinitely repeating circuit. At very large n, there are so many resistors in the way that the contribution of the remaining circuit connected in parallel is basically nothing. So pick whatever value of net resistance R_N for an arbitrary cut off point N, then recursively find the net resistance in the circuit up to point N-1 connected in parallel with everything remaining in the circuit after point N.
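A sketch of that calculation in Python, with illustrative 1 Ω values: truncate the infinite ladder at an arbitrary depth N, then fold each section's shunt resistor in parallel with everything deeper on the way back up.

```python
def ladder(n, r_series=1.0, r_shunt=1.0):
    # Base case at the far end: approximate the rest of the
    # infinite ladder as just one final section.
    if n == 0:
        return r_series + r_shunt
    rest = ladder(n - 1, r_series, r_shunt)
    # One more section: series resistor, then the shunt resistor
    # in parallel with everything deeper in the ladder.
    return r_series + (r_shunt * rest) / (r_shunt + rest)
```

For a ladder of 1 Ω resistors this converges to the golden ratio, about 1.618 Ω, after a few dozen sections.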
There are other cases I can think of where the base case is actually the function converging (or becoming linear) for large rather than small inputs. And still other cases I know of where the function has more than one input argument, and thus it might make sense to increase the size in one input to reduce it in another etc.
YeGoblynQueenne 4 days ago [-]
>> All non-infinite recursive algorithms should have a “base case and a smaller caller”, meaning there needs to be a terminal state and the algorithm should narrow its scope on each recursive call.
Oh, well, I don't like that, sorry. Recursion is a syntactic property but what you explain here is an algorithmic one. Unbounded recursion, non-terminating recursion and left-recursion are still recursion. My intuition is that you should teach termination and recursion as different concepts and help students understand that they are not the same. So that they learn to avoid non-terminating recursion that is.
Btw I think what you describe is called a Knuth-Bendix ordering.
jeffrallen 4 days ago [-]
This!
Get your base case right and you're 1/n th of the way there. Get your recursive calls right and you've got the other (n-2)/n th right.
Then you only need to account for the off by one error and you're done.
:)
fifilura 4 days ago [-]
I would love an example of what you just wrote!
Jtsummers 4 days ago [-]
The simplest example is with factorial. The "base case" is when we hit 0 or 1, we want to return 1. By identifying and defining this case first you ensure that your program's execution will be bounded (which is normally what we want):
    def factorial(n):
        if n == 0 or n == 1:
            return 1
        # recursive case(s)
This is an incomplete program, the next part is to identify each of the recursive calls. In this case there's just one, and it's achieved by reducing `n` by 1 and multiplying the result of the call by the current `n`:
    def factorial(n):
        if n == 0 or n == 1:
            return 1
        return n * factorial(n - 1)
And if you're using a non-braindead interpreter/compiler you'd make this more efficient by using tail-calls (these get optimized away by reasonable language implementations, the main Python implementation is not reasonable):
    def factorial(n, accum=1):
        if n == 0 or n == 1:
            return accum
        return factorial(n - 1, accum * n)
For more complex cases, consider a binary tree or node in an abstract syntax tree (AST). Assuming an ordered binary tree (left node is less than current node, right is greater than the current node) and a search over the tree (this is also easy to do without recursion at all but for the illustration):
Assume the node is something like this:
    from dataclasses import dataclass

    @dataclass
    class Node:
        item: int = 0
        left: "Node" = None
        right: "Node" = None
    def search(node, value):
        # 1st base case: the value was not found
        if node is None:
            return False
        # 2nd base case: the value was found
        if node.item == value:
            return True
Again, at this point the program is not complete but we've identified the two base cases. The next step is to figure out the recursive step(s). In this case there are two recursive steps but only one will be taken (if we aren't at the base cases yet): go left or go right.
    def search(node, value):
        if node is None:
            return False
        if node.item == value:
            return True
        if value < node.item:
            return search(node.left, value)
        if value > node.item:
            # technically this test will always be true if we reach this point
            # so the right search can be made unconditional. I put this here
            # to be more explicit, remove the test to be more efficient.
            return search(node.right, value)
If your data structure or problem has a recursive structure, you can use this approach.
fifilura 4 days ago [-]
Thank you! <3
analog31 5 days ago [-]
I first learned about recursion in math class, through the theory of induction. Unfortunately, I fear that induction will disappear from the math curriculum if it's not seen as a problem solving technique.
funcDropShadow 3 days ago [-]
Wow, how can one come up with the impression that induction isn't a problem solving technique and be in a position to decide about a math or CS curriculum? Induction is one of the most fundamental proof techniques in mathematics. And every program is isomorphic to a proof in some logic. I'll skip the details; look up the Curry-Howard isomorphism if interested. But that means if you develop a mathematical intuition for induction, you also develop an intuition about programs, even complicated ones. That is enormously helpful in programming, unless you only want to program CRUD services and spreadsheets.
analog31 3 days ago [-]
Ah, you're preaching to the choir. But I'm the only person I know -- in a large technical organization -- who remembers doing proofs in math class and liking it. By the time my kids were in school, proofs had largely vanished from the math curriculum.
bko 3 days ago [-]
As an aside, why is buying from the publisher (the preferred method) almost always more expensive? In this case it's $40 vs $30.
Why list it on Amazon for less? Shouldn't the direct from publisher be lower since they don't have to pay Amazon? Plus Amazon throws in free one day shipping.
What's the economics behind this?
tpoacher 2 days ago [-]
Page 1: For more details, refer to page 1.
4 days ago [-]
treebeard901 4 days ago [-]
"It is exactly of the same nature as the Hindu's view, that the world rested upon an elephant and the elephant rested upon a tortoise; and when they said, 'How about the tortoise?' the Indian said, 'Suppose we change the subject.' "
retarga 4 days ago [-]
Yet another Python marketeer. I suggest Lisp or SML to understand recursion.
The author was vocal on Reddit defending the suspension of Tim Peters. I'm not sure if he ever has contributed anything substantial to Python itself. Implicitly defending slander and ostracism is vile, so avoid this one.
anArbitraryOne 4 days ago [-]
[flagged]
furyofantares 4 days ago [-]
I'm so sorry that happened to you. I hope you're okay.
anArbitraryOne 4 days ago [-]
;)
4 days ago [-]
Check out the famous NASA coding rules. Rule number one is no recursion. :P https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...
When I was a game programmer, recursion was to be avoided at all costs.
Even as a past graphics researcher, I’ve used recursive flood-fill as part of several research papers, and recursive flood-fill on a large image will blow your stack, so I always translate it into a heap allocation and implement the recursion as an iterative algorithm with breadcrumbs for backtracking. There’s often a way to increase the stack size, but it’s really annoying and not always possible on every machine you want to run on.
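That translation looks something like this sketch in Python (a minimal, hypothetical version, not the actual code from those papers): an explicit list takes the place of the call stack, so a large image only grows heap memory instead of overflowing stack frames.

```python
def flood_fill(img, x, y, new_color):
    # img is a list of rows; (x, y) is the seed pixel.
    old_color = img[y][x]
    if old_color == new_color:
        return
    stack = [(x, y)]  # heap-allocated "breadcrumbs" replacing call frames
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(img) and 0 <= cx < len(img[cy]) and img[cy][cx] == old_color:
            img[cy][cx] = new_color
            # Revisit the four neighbours, exactly as the recursive
            # version would, but without nesting function calls.
            stack += [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]
```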
One case where recursion or something that looks like recursion is used in production code is when dealing with tree data structures. Those are common, and traversing them is common. Recursion is acceptable for that - though people who care about performance will still optimize it into an iterative algorithm whenever possible (which is most of the time).
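A sketch of that optimization, assuming nodes shaped like the Python examples upthread: an in-order traversal where a small explicit stack replaces the recursive calls.

```python
class Node:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def inorder(root):
    out, stack, node = [], [], root
    while stack or node:
        # Walk as far left as possible, remembering ancestors on
        # our own stack instead of the call stack.
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        out.append(node.item)
        node = node.right
    return out
```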
How so?
Do you mean "in Python" or similar languages, or do you mean in general? I write all my code in Prolog basically, and there it's all recursion, no iteration. There are no iteration constructs. The Prolog interpreter has tail-call optimisation and you learn to use it soon enough.
Anyway let's not be all dogmatic like that. Rules always have exceptions (including this one ha ha) so there's no point in being so absolute about things.
>> Check out the famous NASA coding rules. Rule number one is no recursion. :P https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...
Edit: Rule no 1 from NASA is "avoid complex control flow". From your link:
Seems that even NASA accepts a modicum of GOTOs and recursion.

> How so?
Stack + function call recursion is almost always slower and more memory intensive than alternatives, not to mention risk of stack overflow, so most people who care about perf and mem will opt out anyway. Tail recursion isn’t included, because it optimizes out the function call, and tail recursion is iteration anyway.
If tail recursion isn’t an option, it means a tree or multi-branch recursion, which is generally speaking difficult to reason about and difficult to bound, so error prone. People who care about safety, like NASA, avoid recursion for those reasons. I’m sure they will use recursion when it’s the only choice, or the best choice. Choice of data structure might affect whether recursion is best.
> I write all my code in Prolog
Interesting! For what industry? I‘ll admit I’ve never met any production Prolog, but of course the recursion rule doesn’t apply to a language that has no other choice. Wikipedia kinda bluntly suggests that Prolog is best for research and education, and rarely used in production code…?
I think the 2nd rule ("All loops must have fixed bounds. This prevents runaway code.") refers to numerical bounds, so if you take it literally it would also eliminate e.g. while loops with a boolean condition, but I don't know if that's the intention. In NASA, with very expensive equipment on the line, maybe; but for most other environments that would be an onerous requirement.
In any case you can turn any recursion with non-numeric bounds to one with numeric bounds easily. You do that by adding a parameter that counts the recursion steps and exits if the count exceeds a limit. For added security, you can do this check in each "clause" (literal or figurative) of the recursive procedure, not just the boundary condition. Then you have depth-limited recursion and bobs your ankle. Or Bob is your uncle, I'm never sure about that one.
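In Python that depth-limited pattern might look like the following sketch (the limit value is arbitrary, and the tree search is just the binary-search example from upthread with a counter threaded through):

```python
class Node:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def search(node, value, depth=0, max_depth=10_000):
    # The extra check runs on every recursive step, turning plain
    # recursion into depth-limited recursion with a numeric bound.
    if depth > max_depth:
        raise RecursionError("recursion depth limit exceeded")
    if node is None:
        return False
    if node.item == value:
        return True
    child = node.left if value < node.item else node.right
    return search(child, value, depth + 1, max_depth)
```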
>> Interesting! For what industry? I‘ll admit I’ve never met any production Prolog, but of course the recursion rule doesn’t apply to a language that has no other choice. Wikipedia kinda bluntly suggests that Prolog is best for research and education, and rarely used in production code…?
Most of my Prolog coding has been for my PhD and post-doc research of course but I'm now in my third industry contract where I work almost exclusively with Prolog (the rest is a bit of shell scripting and the like). I don't want to say what the industry is, or the job, but as a for instance my two previous Prolog industry gigs were in an AI startup and the other working on a legacy system for management of rental car fleets, both in France two years ago. The car system was written in the '90s and it had changed many hands since then so it was a royal mess and I didn't stay long working on it but I recently spoke with one of the other devs and it's still going.
There's plenty of that legacy Prolog expert system code around, all left over from the '90s, written before the winter. Sicstus, the biggest commercial Prolog vendor, used to brag on their landing page that if you've booked any air travel recently then chances are you used a Prolog-based system, which shouldn't come as a surprise. After all, every time you use online banking, you are using a system based on COBOL and running on a mainframe. That claim has recently disappeared from the Sicstus site though, and I don't know what that means. Anyway, Prolog coders occasionally hear about gigs working on stuff like that through the grapevine, but judging from my experience those are mainly maintained by programmers who have never seen Prolog before in their lives, until they're thrown in at the deep end and have to make do.
But I see you're making a distinction between "research and education" and "production code"? Why's that? The code I write for my research is production code. I can tell you that with a straight face because I've worked in the industry many years before I did my PhD and my Prolog code is much more stable, and maintainable to boot, than 90% of the rickety, held together with bits of string and gaffer tape, "production" code I worked with as an industry code monkey.
And btw I'm still a code monkey. That's what makes code "production": who writes it, not who it is for.
Btw my post-doc Prolog code had to run on a robot that cost a few thousand quid (I'm in the UK) so it had to be industrial strength. I never got to test it because my funding er expired. In academia, you have to be careful who you're telling hard, technical truths to. So now I'm working in the industry.
Yes NASA’s rule is intended to prevent loops with only boolean conditions. Everyone (i.e. people who did not write the code) should be able to know for certain by glancing at a loop whether it will finish, and fixed bounds gets you most of the way there. It is fine if you put a fixed call count on the recursion, assuming that recursion is actually the best choice. It’s just that actual recursion is almost never the actual best choice. Having a fixed call count and a fixed recursion depth nearly always suggest that the problem can be easily translated to iteration, and iteration is cheaper. Even something as simple as using a simulated stack structure to replace function calls is usually preferable to burning memory with stack frames and wasting registers and cycles passing parameters around. People who do high performance work or just care about efficiency don’t question these things.
NASA’s rules aren’t unusual compared to other organizations that do safety-critical code. Even video games, like I mentioned, used to do nearly everything on NASA’s list. Embedded code on small devices still does all of this; car manufacturers follow the same rules. When memory is very small or limited, and when crashes have high consequence and are unrecoverable, the situation is completely different than a grad student sitting at a terminal.
Their rules aren’t very onerous either, most of the time, they just require changing habits and thinking differently. Once you get used to them, just like with any coding practices, there are simple patterns you can follow to implement just about anything.
One exercise that might(?) be fun for you is take some of your research code and port it to the smallest Arduino you can buy, and get it to run. Even if Prolog is an option on that platform, you might find that standard recursion is not.
“Production code” is a fairly common/standard term (Google it and browse multiple answers) that does mean in part who it’s for. The term is meant for situations where code is usually written by teams, most often large teams, and where the code is written in service of a product, and will be run by people other than the authors of the code. Production code is not a comment on the quality of the code nor of the choice of language, but it does mean the code will be tested by a QA department and that it will be run under uncontrollable conditions by people who are not experts or coders.
If you write code for your PhD and you run it yourself, it’s not production code, period, no matter how stable or high quality you believe it to be. For sure there are cases where research code and production code overlap. They are relatively rare, but you might be in that part of the Venn diagram.
This is my factorial predicate in Prolog, tail-recursive style:
And this is its output running on SWI-Prolog, printing out statistics about its execution:

That will run on the smallest Arduino you can find, or at least the smallest embedded system you'll likely find that runs an OS. Note that the above is measuring all utilisation since I started up SWI-Prolog, not just to run my code (e.g. 3,441 predicates is all the SWI-Prolog libraries).
If I try to do the same calculation on the Windows 11 Calculator (switch to "Scientific" version), I get the following output:
That's on my 64 GB RAM laptop. I'm saying this to show that 100k! is not a small calculation, but it can be done efficiently with tail-call optimisation.

>> People who do high performance work or just care about efficiency don’t question these things.
I'll try not to be hurt by your words :P
BTW, I am including (and NASA is talking about) embedded systems without an OS. Most of my game programming career involved platforms with no OS.
And when did we switch from "smallest Arduino" to "system with no OS"? I'm pretty sure that if you keep looking for the smallest platform on which you can run something, anything, you'll find a point you can make and prove, but that's kind of meh. You're not going to run a SAT solver, a planner or an image classifier on an Arduino with 512 KB RAM. You're not going to run Doom on a bank card either. Not because of recursion, because of everything else.
And if you can't transform recursion into iteration, that's a procedure you can't run without recursion anyway.
In my mind, I was assuming and including no-OS scenarios from the start, you just assumed an OS because you’re using threads and garbage collection and memory allocation. NASA’s computers on the space shuttle and rovers haven’t historically run OSes, and I assumed you might have known that. Little computers like ECUs in cars don’t run OSes either. Console video games before about 15 years ago also rarely had an OS. The Nintendo Wii didn’t have one, nor any Nintendo console before that. We never switched, I’m clarifying now because you specified an Arduino with an OS. I was talking about safety-critical, embedded, and small memory systems from the start. This point is very relevant because it’s one of the reasons that recursion is to be avoided. Doom does run on very tiny computers without an OS, like a calculator. I don’t think Doom uses any recursion or heap allocation (except maybe to initialize) or garbage collection.
I’ve never seen a recursive algorithm that can’t be transformed into iteration, or where the recursion can’t be emulated with a faster iterative mechanism and smaller memory footprint than function calls. I doubt that one exists. Tail recursion is an example of trivial recursion automatically transforming function calls into iteration.
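As a sketch of that transformation, here is a tail-recursive factorial next to the loop a TCO-capable compiler would effectively turn it into:

```python
def fact_tail(n, accum=1):
    if n <= 1:
        return accum
    # Tail position: nothing is left to do after the call returns,
    # so the current frame could be reused.
    return fact_tail(n - 1, accum * n)

def fact_loop(n):
    # The mechanical TCO rewrite: parameters become loop variables,
    # and the tail call becomes the next iteration.
    accum = 1
    while n > 1:
        accum *= n
        n -= 1
    return accum
```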
Primitive recursive functions are the ones that can be "unrolled" in iterative loops. There are functions that are recursive, but not primitive recursive, like the Ackermann function (https://en.wikipedia.org/wiki/Ackermann_function). Those cannot be replaced by iteration.
I think you're putting far too much trust into "I haven't seen" and not considering the fact that there is more to see.
No, I was being cheeky. The fact that recursion can always be transformed into iteration has been proven. There is no such thing as a “procedure that can’t run without recursion”, which I assumed you already knew, and you might want to keep in mind if you want to keep defending recursion.
So do you have any real-world examples where recursion is preferred? This sub-thread started because you were taking umbrage with my claim that recursion is for education and not often used in production code. So far your examples are proving my point, 100% of your examples are classroom problems and not things people need in practice. List reversing might be the most common, and it’s very rare for someone to need to implement it (languages and libraries usually provide it), and far rarer still to need to implement it recursively in practice. Factorial of large numbers and Ackermann function are purely academic exercises.
There is such a thing. Non-primitive recursive functions can't be converted to iteration.
I'm sorry but where did I give examples that are "100% classroom problems"?
I gave examples of production code that use Prolog, therefore recursion since Prolog has no iteration constructs: the three jobs in the industry I've held, and the airline software from Sicstus. I can also add IBM's Watson (it used Prolog), Datalog databases (recursion is the point), and the SWI-Prolog website which is running on an http server implemented in Prolog (they call that "eating their own dogfood"). Those are all things that people need in practice and not "classroom problems". There's plenty more Prolog code running in the wild but obviously I don't know every program anyone has ever written using Prolog.
There's also code in Answer Set Programming (e.g. I'm aware of planners for robotics applications), Haskell (e.g. xmonad, a windows manager in Haskell) and plenty of code people write in Scheme, Scala, etc Lisps, that also don't have iteration constructs. There's plenty of recursion in "real world" and "production" situations. But I get the feeling you don't work with it, so you just don't believe it exists. That's not a great way to reason about things.
That’s not true, I think you’re misunderstanding the article you linked to. Primitive recursive functions require fixed bounds, but that is not the same thing as all iteration. The Church-Turing thesis proves that anything you call recursion can be iterated.
> I'm sorry but where did I give examples that are "100% classroom problems"?
I answered that question in the comment you replied to.
> But I get the feeling you don't work with it, so you just don't believe it exists. That's not a great way to reason about things.
Well, to that I offer that writing incorrect assumptions and straw-man responses isn’t a great way to communicate. I actually see recursion every day, and I still stick by what I said. I even work on a team that builds recursive hardware that accelerates traversing recursive data structures. To me it really seems like you’re still not understanding my point about call stacks vs iteration. I’m sorry my posts are making you feel defensive. Maybe you missed the part where I mentioned that if you work in a language like Prolog that offers no choice, you don’t need to worry about it? The time to worry about it is when there’s a choice, like in C++ or JavaScript or Python. Keep using tail recursion, that’s a good way to avoid some of the problems with recursion!
>> Recursion is great for CS education, but recursion should be rarely/never used in production code. Tail call recursion is one exception to that - I see it in production code occasionally. But tail call recursion is limited to cases of 1-dimensional recursion that can be transformed into iteration - and iteration is better anyway, so even tail calls need some extra justification for use in production code.
"... recursion should be rarely/never used in production code" and "even tail calls need some extra justification for use in production code". Then you listed the JPL coding guidelines that say "avoid recursion". And now you're saying that what you really meant was that yes, you should use recursion, and tail recursion, but that doesn't count because it can be replaced by iteration anyway? Then why not avoid the replacement service and have all your loops be iterative in the first place? Or is that your argument? I really can't tell. And I'm getting the feeling that you don't know exactly what you're arguing for yourself, especially if you say (big reveal) you are working with recursion everyday. What's this all about now?
>> That’s not true, I think you’re misunderstanding the article you linked to. Primitive recursive functions require fixed bounds, but that is not the same thing as all iteration. The Church-Turing thesis proves that anything you call recursion can be iterated.
Right, so now we're talking about unbounded iteration? Again that's not the impression I got from the discussion upthread. Is that the good, safe, NASA-compliant kind of coding practices you support? I don't think it is.
>> Keep using tail recursion, that’s a good way to avoid some of the problems with recursion!
Now you're just being patronising. I was hoping we could have a robust disagreement without taking the piss, but I guess this one too has gone the way of all the conversations on the internet. Let's end this here then, I'll let you have the last word (since I wrote a bit already myself).
And no hard feelings from me. Communication is difficult.
Btw, this is interesting:
>> I even work on a team that builds recursive hardware that accelerates traversing recursive data structures.
If you want to say a couple of words about what that is, I'd be curious to hear.
Jesus Christ, I was just trying to be nice. Your defensiveness is repeatedly leading to misunderstanding and getting in the way of us having a productive conversation.
> Right, so now we’re talking about unbounded iteration?
You’re conflating multiple different topics here. NASA doesn’t allow unbounded iteration. That’s a separate topic from whether recursion can theoretically be transformed into iteration.
Recursion can always be transformed into iteration. That statement is not and never was dependent on what NASA does. There is no such thing as a recursive function that can’t be iterated. In practice, iterating is as efficient or more efficient than using a call stack. This is still and always was my point. If you’re using a call stack for recursion, you are wasting memory and cycles, which is one of the reasons “Software developed in Prolog has been criticised for having a high performance penalty compared to conventional programming languages.” https://en.wikipedia.org/wiki/Prolog#Limits
I work on ray tracing hardware, which is conceptually recursive, uses recursive tree data structures, and is often implemented using recursion. We make it faster by transforming the recursion into specialized iterative hardware, and by using algorithms that iterate in place of recursing. Computer graphics is full of algorithms that are conceptually recursive and yet always more efficient in practice if you avoid call recursion and implement them iteratively instead. Character rig animation is one example. And browsers do this for rendering HTML.
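As a concrete illustration of that transformation (a minimal Python sketch of the general technique, not of the hardware): a recursive tree traversal can be rewritten mechanically to use an explicit stack, replacing call frames with a plain list.

```python
from collections import namedtuple

# A tiny binary-tree node; left and right are Node instances or None.
Node = namedtuple("Node", ["value", "left", "right"])

def sum_tree_recursive(node):
    # Uses the call stack: one frame per node on the current path.
    if node is None:
        return 0
    return node.value + sum_tree_recursive(node.left) + sum_tree_recursive(node.right)

def sum_tree_iterative(root):
    # Same traversal, but the "stack" is an ordinary list we manage ourselves,
    # so there is no risk of overflowing the call stack on deep trees.
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is not None:
            total += node.value
            stack.append(node.left)
            stack.append(node.right)
    return total

tree = Node(1, Node(2, None, None), Node(3, Node(4, None, None), None))
assert sum_tree_recursive(tree) == sum_tree_iterative(tree) == 10
```

The iterative version also makes the memory cost explicit: the list holds exactly the pending subtrees, with none of the per-frame overhead of a function call.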
And rule number 3 is no heap allocation.
I didn't think I'd find this, so I went digging. Sure enough:
What can one do, but just disagree I guess?

> And what is the purpose of the book?
To straw-man functional programming with a one-two punch of "I already have that" and "it's bad" so devs never leave Python/JS land to find this stuff out for themselves ;)
A bizarre comment at best, from someone writing a book on recursion.
It's not shocking that students snooze their way through discrete math, try their hardest to forget induction proofs, and then arrive in algorithms or some other higher-level class and try their hardest to forget recursion/dynamic programming.
The reality is the concepts are hard and demand practice to obtain familiarity. There is no quick path to mathematical maturity. If you try your hardest to phone in induction then surprise surprise recursion is nonsensical to you.
The first time I tried to implement something recursively in C++ at my job after college, my more senior teammates were concerned about whether we could safely assume that all of the different compilers would correctly optimize the tail recursion to avoid blowing up the stack, and they preferred rewriting the logic to be iterative rather than spend time trying to figure that out. This was something I was vaguely aware of as a real-world issue, but it hadn't occurred to my naive junior engineer mind that I couldn't just assume that prestigious "professional" compilers like GCC and Clang and MSVC would take care of this for me when I never had to worry about it in OCaml. After all, everyone in the world seemed to be writing code in C++ with one of these compilers, and the OCaml toolchain didn't exactly have a glowing reputation in my mind when the only place I had heard of using the language didn't even use the default standard library!
I'm not saying that I think this is necessarily a good way to teach recursion generally, but I definitely think there's room for resources like this that help bridge the gap between understanding how recursion works in the best possible environment for it and showing how it works (warts and all) for specific real-world ecosystems.
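Python shows the same hazard without any compiler in the mix: CPython performs no tail-call elimination at all, so even a textbook tail call exhausts the stack where the equivalent loop runs in constant space (a minimal sketch; CPython's default recursion limit is about 1000 frames):

```python
def countdown_recursive(n):
    if n == 0:
        return "done"
    return countdown_recursive(n - 1)  # a tail call, but CPython won't eliminate it

def countdown_iterative(n):
    # The same logic as a loop: constant stack space, no interpreter limit.
    while n > 0:
        n -= 1
    return "done"

assert countdown_iterative(100_000) == "done"

try:
    countdown_recursive(100_000)  # far past CPython's default recursion limit
    blew_stack = False
except RecursionError:
    blew_stack = True
assert blew_stack
```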
Because he considers his readers.
I can guarantee that a staggeringly high percentage of the readers have access to a web-browser capable of executing JavaScript (probably 100%). The percentage of the readers with Node installed will be necessarily lower.
Non-primitive recursive functions are functions that are recursive but can't be "unrolled" into loops whose bounds are fixed before they start, like the Ackermann function (which was specifically constructed to demonstrate that such functions exist).
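For the curious, the usual two-argument (Ackermann–Péter) form is short to write in Python; it is still computable by ordinary recursion, it just can't be expressed with loops whose iteration counts are fixed in advance, and its values explode almost immediately:

```python
def ackermann(m, n):
    # Total and computable, but grows faster than any primitive recursive function.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

assert ackermann(1, 4) == 6   # A(1, n) = n + 2
assert ackermann(2, 3) == 9   # A(2, n) = 2n + 3
assert ackermann(3, 3) == 61  # A(3, n) = 2**(n + 3) - 3
```

Even ackermann(4, 2) = 2**65536 - 3 already has 19,729 decimal digits, and computing much beyond that is hopeless.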
Recursion doesn't have to be complicated. It's the syntax of most programming languages that makes it complicated. For example in Prolog, a clause is recursive if it has a literal in the body that has the same symbol and number of arguments as in the head literal. This is easy to show with a schematic example:

    head(X, Y):- terminating_condition(X, Y).
    head(X, Y):- some_goal(X, Z), head(Z, Y).

That's a recursive predicate. The first clause is not recursive- it's the terminating condition. The second clause is recursive: the second occurrence of the head/2 symbol, in head(Z, Y), is the recursive call.

The only reason you can't do that in Python is because the syntax doesn't make it easy. In Prolog a clause is a set of literals and a program is a set of clauses, so recursion is very simple and natural to express as a re-occurring literal, like I explain above.
I did get the point you are making, but I wasn't sure what to say about it, so I didn't say anything. Let me give it a go now. The problem with Turing completeness is that you don't know whether you'll need it until you do. There are non-Turing-complete languages around, of course, such as HTML (as far as I can tell). You can do quite a lot with HTML but, for example, you can't really define business logic.
In logic programming (Prolog is a logic programming language) there have been some efforts to eliminate the non-termination of recursion in Prolog while retaining the good parts of the logic-based and declarative paradigm. The result is languages like Datalog and Answer Set Programming (unfortunately abbreviated as ASP; long before ASP.NET, though), which certainly have their place and certainly let you define complex behaviour, including recursive procedures. On the other hand, neither language lets you use lists. You know, lists: [1,2,3]. Lists are recursive data structures, and without the possibility of unbounded recursion you can only have fixed-length lists. People in the ASP community know how to simulate lists by essentially breaking them up into single elements and having some code that looks for the next element, but that's a clunky, hacky, and incomplete notation that just goes to show what a mess one makes when one tries to make something better by making it worse. I'm afraid that's a pattern that applies to much more than logic programming, and trying to make programming languages safe by making it impossible to write non-terminating code will only do the same thing.
Non-termination is by no means something you get only with recursion, of course. I like to tell this story at length but I'll spare you the details: I was working in a C# shop and one bug that brought the company servers down (that was before the cloud) was caused by a foreach loop with a faulty condition. There was a talk about all that "upstairs". It was put to the suits that it is impossible to guarantee that an iterative loop will terminate, and the decision was made to... use recursion instead. In C#. Obviously that was completely ignored by everyone. That goes to show... something. But mainly that you can jolly well make a mess with unbounded iteration, just as with unbounded recursion.
So in general I think what you say sounds good on paper but won't work in practice. It's a bit like asking why we can't have a campfire that won't burn your marshmallows. Well, because fire burns, and that's the whole point. The undecidability of Turing-complete languages is not an academic curiosity. It's a direct result of the power of computers. You can remove the undecidability, but then you've taken away the power.
For example, take an implementation of exp(x). The exponential function is defined as the sum to infinity of (x^n)/n!. This can be implemented recursively as exp(x,n) = 1+(x/n)exp(x,n+1), with the top-level call being exp(x,1). The challenge is to figure out the value (or criteria) for what value of n to cut this infinite evaluation short. Well, once n is larger than x, all further terms are multiplied by factors less than 1. So pick some proportionality constant k such that if x is k times smaller than n (that is, x * k < n) then the remainder is presumed negligible and the function returns 1.
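A sketch of that in Python (the function name and the choice k = 10 are mine; the recurrence and the cut-off rule are as described above):

```python
import math

def exp_approx(x, n=1, k=10):
    # Base case: once n is more than k times larger than x,
    # the remaining tail of the series is presumed negligible.
    if x * k < n:
        return 1.0
    # Recursive step: exp(x, n) = 1 + (x/n) * exp(x, n + 1)
    return 1.0 + (x / n) * exp_approx(x, n + 1, k)

assert abs(exp_approx(1.0) - math.exp(1.0)) < 1e-6
assert abs(exp_approx(2.0) - math.exp(2.0)) < 1e-6
```

Note the top-level call is exp_approx(x) with n starting at 1, which unwinds to the familiar 1 + x + x²/2! + x³/3! + … up to the cut-off.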
Another really nice example I know involves finding the resistance across a simple but infinitely repeating circuit. At very large n, there are so many resistors in the way that the contribution of the remaining circuit connected in parallel is basically nothing. So pick whatever value of net resistance R_N for an arbitrary cut off point N, then recursively find the net resistance in the circuit up to point N-1 connected in parallel with everything remaining in the circuit after point N.
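That one can also be sketched in Python (the names are mine; depth plays the role of the cut-off point N, and for the classic ladder with all resistors equal to 1 Ω the total converges to the golden ratio):

```python
def ladder_resistance(r_series, r_shunt, depth):
    # Base case: past the cut-off, approximate the infinite tail by the
    # shunt resistor alone -- at large N its exact value barely matters.
    if depth == 0:
        return r_shunt
    rest = ladder_resistance(r_series, r_shunt, depth - 1)
    # One ladder section: r_series in series with (r_shunt parallel to the rest).
    return r_series + (r_shunt * rest) / (r_shunt + rest)

golden_ratio = (1 + 5 ** 0.5) / 2
assert abs(ladder_resistance(1.0, 1.0, 40) - golden_ratio) < 1e-9
```

Because each section damps the influence of everything behind it, the result converges quickly regardless of the value assumed at the cut-off.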
There are other cases I can think of where the base case is actually the function converging (or becoming linear) for large rather than small inputs. And still other cases I know of where the function has more than one input argument, and thus it might make sense to increase the size in one input to reduce it in another etc.
Oh, well, I don't like that, sorry. Recursion is a syntactic property but what you explain here is an algorithmic one. Unbounded recursion, non-terminating recursion and left-recursion are still recursion. My intuition is that you should teach termination and recursion as different concepts and help students understand that they are not the same. So that they learn to avoid non-terminating recursion that is.
Btw I think what you describe is called a Knuth-Bendix ordering.
Get your base case right and you're 1/n th of the way there. Get your recursive calls right and you've got the other (n-2)/n th right.
Then you only need to account for the off by one error and you're done.
:)
Assume the node is something like this:
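A minimal Python stand-in for the node being described, together with the search that gets built around it (the names are illustrative):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left    # subtree of smaller values, or None
        self.right = right  # subtree of larger values, or None

def search(node, target):
    if node is None:           # base case 1: fell off the tree -- not found
        return False
    if node.value == target:   # base case 2: found it
        return True
    if target < node.value:
        return search(node.left, target)   # recursive step: go left
    return search(node.right, target)      # recursive step: go right

root = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
assert search(root, 6) is True
assert search(root, 7) is False
```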
Again, at this point the program is not complete but we've identified the two base cases. The next step is to figure out the recursive step(s). In this case there are two recursive steps but only one will be taken (if we aren't at the base cases yet): go left or go right. If your data structure or problem has a recursive structure, you can use this approach.

Why list it on Amazon for less? Shouldn't buying direct from the publisher be cheaper, since they don't have to pay Amazon? Plus Amazon throws in free one-day shipping.
What's the economics behind this?
The author was vocal on Reddit defending the suspension of Tim Peters. I'm not sure if he ever has contributed anything substantial to Python itself. Implicitly defending slander and ostracism is vile, so avoid this one.