I mentioned in a previous post that a career in Software Development, or any part of Information Technology, is a career of lifelong learning. This would be true even if technology did not change so rapidly, for the simple reason that the more problems we solve with computers, the more problems we become able to solve. Computers allow us to do things not only faster, but smarter, leaving us humans free to turn our attention to more and more opportunities.

After 35 years in software and IT, I have become pretty confident in my ability to learn and take on new tasks. This is not because I know more than when I was younger, but because for 35 years I have refused to sit still. Instead, I continue to branch out and try new things. I have never been the “go to guy” for a specific technology. I am not the expert on all things SpiffySoft, nor am I the local expert on the FrogSnot programming language. For 35 years I have worked in the cracks, seams and edges of the project life cycle. Where algorithms fail and best practices fall short; where others have abandoned all hope; any problem that makes others say “huh?”; these are the tasks I excel at.

As impressed as I am with myself, I was nonetheless reminded of the danger of overconfidence. Fortunately, this was not at work. No client or schedule suffered due to this learning moment. All the same, there is nothing like a good intellectual ass-kicking to restore some humility to my life.

I just started reading Dr. Donald Knuth’s *“The Art of Computer Programming.”* Now, I do not have a Computer Science background, so it is a good thing these books were actually designed as a first course in Computer Science. I expected this to translate to an erratic pace, where I would breeze right through some topics while others might take a bit longer. In fact, the first volume begins with an explanation of what an “algorithm” even is, including a bit of history on the etymology of the word. To illustrate, he chose a very simple algorithm: Euclid’s Algorithm for determining the greatest common divisor of two positive integers.

Without explaining all the notation, some of which is unique to its appearance in Dr. Knuth’s book, I will present the algorithm here in the same manner:

**Algorithm E** (Euclid’s algorithm). Given two positive integers *m* and *n*, find their *greatest common divisor*, that is, the largest positive integer that evenly divides both *m* and *n*.

**E1.** [Find remainder.] Divide *m* by *n* and let *r* be the remainder. (We will have $ 0 \leq r < n $.)

**E2.** [Is it zero?] If $ r = 0 $, the algorithm terminates; $ n $ is the answer.

**E3.** [Reduce.] Set $ m \leftarrow n $, $ n \leftarrow r $, and go back to step E1. $ \Bigg | $

My first thought upon reading this was “That is so cool! How is it I have never heard of this?” My second thought was *“huh?”*

In short, divide *m* by *n*. If the remainder (*r*) is not zero, set *m* equal to *n* and *n* equal to *r*. Rinse and repeat until you get a zero remainder.
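The three steps above translate almost line for line into code. Here is a minimal sketch in Python (the function name `gcd` and the sample inputs are mine, not from the book):

```python
def gcd(m, n):
    """Euclid's algorithm: greatest common divisor of two positive integers."""
    while True:
        r = m % n        # E1. Find remainder.
        if r == 0:       # E2. Is it zero? If so, n is the answer.
            return n
        m, n = n, r      # E3. Reduce, and go back to E1.

print(gcd(48, 18))  # → 6
```

Note that if *m* < *n* on the first pass, the remainder is simply *m*, and step E3 swaps the two values, so the order of the arguments does not matter.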

Validating that this works by example is really simple, so I won’t detail it here. The part that got me was *why* it worked. I let this noodle around in my head for a bit, but it was getting late and I really wanted to read at least one more page. So I forged ahead to the explanation.

The explanation from the book went something like this:

After step E1, we have $$ m = qn + r, $$ for some integer *q*. If $ r = 0 $ then *m* is a multiple of *n*, and clearly in such a case *n* is the greatest common divisor of *m* and *n*.
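As a quick aside of my own (not from the book): in Python terms, *q* and *r* are exactly what the built-in `divmod` returns, and the identity $ m = qn + r $ holds by construction.

```python
m, n = 544, 119            # an arbitrary pair of positive integers
q, r = divmod(m, n)        # quotient and remainder in one call
print((q, r))              # → (4, 68)
print(m == q * n + r)      # → True
```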

“Okay,” I thought, I can live with that. Looking at *m* as a sum *n* times some integer plus the remainder was an unexpected zig when I zagged, but is clearly true. Next came the following:

If $ r \ne 0 $, note that any number that divides both *m* and *n* must divide $ m - qn = r $, and any number that divides both *n* and *r* must divide $ qn + r = m $;

*“Huh…?”*

At this point I was asking myself if I had promised to read one more page, or one more paragraph. Because if it was just one paragraph I could quit now and go to bed…

No, not really. I realized I was not going to sleep until I could understand this in some way. What follows is my understanding of the above, without going out to the Internet for the answer. This won’t be written as a formal proof, it’s been too long since I’ve done that. (**ToDo:** Put *“A Transition to Advanced Mathematics”* book in the *Now Reading* pile.)

Here goes.

Let’s start with the statement above: $$ m = qn + r $$ Now we take some integer *x* that divides both *m* and *n* evenly. This gives us $$ \frac{m}{x} = \frac{qn + r}{x} $$ Rearranging a bit, we get $$ \frac{m}{x} = \frac{qn}{x} + \frac{r}{x} $$ $$ \frac{m}{x} - \frac{qn}{x} = \frac{r}{x} $$ $$ \frac{m}{x} - q \frac{n}{x} = \frac{r}{x} $$

Now here is where we logic it out. By definition, *x* already divides *m* and *n* evenly. That means on the left side of the equation we have $ m/x = $ *some integer*. Likewise, $ n/x = $ *some other integer*. *q* is already established as an integer. This means, in order for *x* to meet the conditions, the left side of the equation must evaluate to *some integer*, minus *q* (an integer) times *some other integer*. In other words, the entire left side of the equation must evaluate to *an integer*.

If this is true, we are left with *an integer* $ = r/x $ which means in order for *x* to evenly divide *m* and *n*, it must also evenly divide the remainder *r*!

A similar argument shows that any number dividing both *n* and *r* must also divide $$ qn + r = m $$ This is basically what justifies step **E3** above and allows us to continue solving this in a systematic manner rather than the time honored RAG (random-assed-guess) method.
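The whole argument boils down to this: the pairs (*m*, *n*) and (*n*, *r*) have exactly the same common divisors, so they have the same *greatest* common divisor. A brute-force check in Python (my own sketch, with a helper name and sample values I made up) makes that concrete:

```python
def common_divisors(a, b):
    """All positive integers that evenly divide both a and b."""
    return {d for d in range(1, min(a, b) + 1) if a % d == 0 and b % d == 0}

m, n = 544, 119            # an arbitrary pair of positive integers
r = m % n                  # the remainder from step E1 (here, 68)
# Identical common divisors, hence identical GCDs.
print(common_divisors(m, n) == common_divisors(n, r))  # → True
```

Each pass of step E3 replaces the pair with a strictly smaller one that preserves the divisor set, which is why the loop is guaranteed to land on the right answer.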

*Whew!* Now I can finish the page, pour a drink and get some sleep…
