How do you expand √(a+b)?

This is a question that was recently asked on Quora:

It’s easy to expand
(a+b)^2 = a^2+2ab+b^2 or
(a+b)^3 = a^3+3a^2b+3ab^2+b^3
or some other (a+b)^n, but what about (a+b)^{1/2}, a.k.a. \sqrt{a+b}?

Here’s my answer:

Just have Wolfram|Alpha do it for you :-).

But if you were on a desert island without access to Wolfram|Alpha, here’s how you might think it through:

Are you already comfortable with the Binomial Theorem? Here it is again, but stated in a particular way that I think we’ll like.

\left(x+1\right)^r=1+rx+\frac{r(r-1)}{2!}x^2+\frac{r(r-1)(r-2)}{3!}x^3+\cdots

Look at it and make sure you understand it, and verify that it really is equivalent to the formulation of the Binomial Theorem you know.

Now, for the big trick. It turns out the above statement holds true not for just r=1,2,3,\ldots but for all real r. The only catch is that this often results in an infinite series. (These series results can also be obtained by Taylor expansion.)

In particular, it works for r=1/2:

\left(x+1\right)^{1/2}=1+\frac{1}{2}x+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2+\cdots

\left(x+1\right)^{1/2}=1+\frac{1}{2}x-\frac{1}{8}x^2+\frac{1}{16}x^3+\cdots

Now, rewriting your original expression (a + b)^{\frac{1}{2}} as \sqrt{b}\left(a/b+1\right)^{1/2} (assuming b>0 and |a/b|<1, so the series converges; if |a|>|b|, factor out \sqrt{a} instead) gives

\sqrt{b}\left(1+\frac{1}{2}\left(\frac{a}{b}\right)-\frac{1}{8}\left(\frac{a}{b}\right)^2+\frac{1}{16}\left(\frac{a}{b}\right)^3+\cdots\right)

=\sqrt{b}+\frac{a}{2\sqrt{b}}-\frac{a^2}{8b^{3/2}}+\frac{a^3}{16b^{5/2}}+\cdots

which is the same result Wolfram|Alpha will spit back.
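If you’d rather check the expansion numerically, here’s a short Python sketch (my own illustration, not part of the original answer) that sums the binomial series for \sqrt{b}\left(1+a/b\right)^{1/2} and compares it with the true square root:

```python
from math import sqrt

def sqrt_series(a, b, terms=10):
    """Approximate sqrt(a + b) as sqrt(b) * (1 + a/b)^(1/2) via the
    binomial series; converges when |a/b| < 1 (and b > 0)."""
    x = a / b
    total, coeff = 0.0, 1.0  # coeff starts at C(1/2, 0) = 1
    for k in range(terms):
        total += coeff * x**k
        coeff *= (0.5 - k) / (k + 1)  # recurrence: C(1/2, k+1) from C(1/2, k)
    return sqrt(b) * total

print(sqrt_series(1, 4, terms=10), sqrt(5))  # the two values agree closely
```

With a=1 and b=4 the first two terms already give \sqrt{4}+\frac{1}{2\sqrt{4}} = 2.25, quite close to \sqrt{5}\approx 2.236.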

Hope that helps!


Arithmetic/Geometric Hybrid Sequences

Here’s a question that the folks who run the NCTM facebook page posed this week:

Find the next three terms of the sequence 2, 8, 4, 10, 5, 11, 5.5, …

Feel free to work it out. I’ll give you a minute.

Done?

Still need more time?

…

Give up?

Okay. The answer is 11.5, 5.75, 11.75.

The pattern is interesting. Informally, we might say “add 6, divide by 2.” This is an atypical kind of sequence, in which it seems as though we have two different rules at work in the same sequence. Let’s call this an Arithmetic/Geometric Hybrid Sequence. (Does anyone have a better name for these kinds of sequences?)

But a deeper question came out in the comments: Someone asked for the explicit rule. After a little work, I came up with one. I’ll give you my explicit rule, but you’ll have to figure out where it came from yourself:

a_n=\begin{cases}6-4\left(\frac{1}{2}\right)^{\frac{n-1}{2}}, & n \text{ odd} \\ 12-4\left(\frac{1}{2}\right)^{\frac{n-2}{2}}, & n \text{ even}\end{cases}
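As a quick sanity check (my own addition, not from the original discussion), the piecewise rule can be coded up and compared with the first several terms of the sequence:

```python
def a(n):
    """Explicit rule for 2, 8, 4, 10, 5, 11, 5.5, ... (n is 1-indexed)."""
    if n % 2 == 1:  # odd-position terms: 2, 4, 5, 5.5, ...
        return 6 - 4 * 0.5 ** ((n - 1) // 2)
    return 12 - 4 * 0.5 ** ((n - 2) // 2)  # even-position terms: 8, 10, 11, ...

print([a(n) for n in range(1, 11)])
# [2.0, 8.0, 4.0, 10.0, 5.0, 11.0, 5.5, 11.5, 5.75, 11.75]
```

Note that the last three values are exactly the 11.5, 5.75, 11.75 claimed above.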

More generally, if we have a sequence in which we alternately add d and multiply by r (adding first), beginning with a_1, the explicit rule is

a_n=\begin{cases}\frac{rd}{1-r}+\left(a_1-\frac{rd}{1-r}\right)r^{\frac{n-1}{2}}, & n \text{ odd} \\ \frac{d}{1-r}+\left(a_1-\frac{rd}{1-r}\right)r^{\frac{n-2}{2}}, & n \text{ even}\end{cases}.

And if instead we multiply first and then add, we have the following similar rule.

a_n=\begin{cases}\frac{d}{1-r}+\left(a_1-d-\frac{rd}{1-r}\right)r^{\frac{n-1}{2}}, & n \text{ odd} \\ \frac{rd}{1-r}+\left(a_1-d-\frac{rd}{1-r}\right)r^{\frac{n}{2}}, & n \text{ even}\end{cases}.
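Both general formulas are easy to verify numerically. Here is a sketch (helper names are mine) that generates the sequence directly by alternating the two operations and compares it against the explicit rules:

```python
def iterate(a1, d, r, count, add_first=True):
    """Generate the hybrid sequence directly, alternating add and multiply."""
    terms, x, do_add = [a1], a1, add_first
    while len(terms) < count:
        x = x + d if do_add else x * r
        do_add = not do_add
        terms.append(x)
    return terms

def explicit_add_first(n, a1, d, r):
    """Explicit rule when we add d first, then multiply by r."""
    k = r * d / (1 - r)
    if n % 2 == 1:
        return k + (a1 - k) * r ** ((n - 1) // 2)
    return d / (1 - r) + (a1 - k) * r ** ((n - 2) // 2)

def explicit_mult_first(n, a1, d, r):
    """Explicit rule when we multiply by r first, then add d."""
    k = r * d / (1 - r)
    if n % 2 == 1:
        return d / (1 - r) + (a1 - d - k) * r ** ((n - 1) // 2)
    return k + (a1 - d - k) * r ** (n // 2)

for add_first, rule in [(True, explicit_add_first), (False, explicit_mult_first)]:
    seq = iterate(2, 6, 0.5, 12, add_first)
    assert all(abs(rule(n, 2, 6, 0.5) - seq[n - 1]) < 1e-12 for n in range(1, 13))
print("explicit rules match direct iteration")
```

With a_1 = 2, d = 6, r = 1/2 and add-first, this reproduces the NCTM sequence exactly.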

And there you have it! The explicit formulas for an Arithmetic/Geometric Hybrid Sequence:-).

(Perhaps another day I’ll show my work. For now, I leave it to the reader to verify these formulas.)

Integration by parts and infinite series

I was teaching tabular integration yesterday and as I was preparing, I was playing around with using it on integrands that don’t ‘disappear’ after repeated differentiation. In particular, the problem I was doing was this:

\int x^2\ln{x}dx

Now this is done pretty quickly with only one integration by parts:

Let u=\ln{x} and dv=x^2dx. Then du=\frac{1}{x}dx and v=\frac{x^3}{3}. Rewriting the integral and evaluating, we find

\int x^2\ln{x}dx = \frac{1}{3}x^3\ln{x}-\int \left(\frac{x^3}{3}\cdot\frac{1}{x}\right)dx

=\frac{1}{3}x^3\ln{x}-\int \frac{x^2}{3}dx

= \frac{1}{3} x^3 \ln{x} - \frac{1}{9} x^3 + c .

But I decided to try tabular integration on it anyway and see what happened. Tabular integration requires us to pick a function f(x) and compute all its derivatives, and pick a function g(x) and compute all its antiderivatives. Multiply diagonally, insert alternating signs, and voilà! In this case, we choose f(x)=\ln{x} and g(x)=x^2. The result is shown below.

\int x^2\ln{x}dx = \frac{1}{3}x^3\ln{x}-\frac{1}{12}x^3-\frac{1}{60}x^3-\frac{1}{180}x^3-\cdots

= \frac{1}{3}x^3\ln{x} - x^3 \sum_{n=0}^\infty \frac{2}{(n+4)(n+3)(n+2)(n+1)} +c

If I did everything right, then the infinite series that appears in the formula must be equal to \frac{1}{9}. Checking with Wolfram|Alpha, we see that indeed,

\sum_{n=0}^\infty \frac{2}{(n+4)(n+3)(n+2)(n+1)} = \frac{1}{9} .
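We can also watch the convergence in exact rational arithmetic. Here’s a small check of my own, using Python’s standard fractions module:

```python
from fractions import Fraction

# Partial sums of sum_{n=0}^inf 2 / ((n+1)(n+2)(n+3)(n+4)), computed exactly.
s = Fraction(0)
for n in range(200):
    s += Fraction(2, (n + 1) * (n + 2) * (n + 3) * (n + 4))

gap = Fraction(1, 9) - s
print(s, float(gap))  # the remaining gap is tiny and positive
```

The terms telescope, which is why the partial sums close in on \frac{1}{9} so quickly.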

Wow!! That’s pretty wild. It seemed like any number of infinite series could pop up from this kind of approach (Taylor series, Fourier series even). In fact, they do. Here are just three nice resources I came across which highlight this very point. I guess my discovery is not so new.

Awesome Math Baby Gear

Here is some awesome baby gear that you’ll get a kick out of, whether you have a little baby (like I do) or not. [ht: Carrie Gaffney]

Check out this one, for instance, perfect for the little Fermat in your life:

Or this one, with a slightly more ‘physics’ flavor, perfect for your little one that is constantly gaining momentum:

For the baby that’s two standard deviations above the mean:

Or how about this …

And here are some more for you:

By the way, the perfect place to collect random photos and other things you love is on Pinterest. I’ve been collecting pins for the last year or so, and you may want to check out these two boards of mine, at least:

Happy pinning!

Pi R Squared

[Another guest blog entry by Dr. Gene Chase.]

You’ve heard the old joke.

Teacher: Pi R Squared.
Student: No, teacher, pie are round. Cornbread are square.

The purpose of this Pi Day note two days early is to explain why \pi is indeed a square.

The customary definition of \pi is the ratio of a circle’s circumference to its diameter. But mathematicians are accustomed to defining things in two different ways, and then showing that the two ways are in fact equivalent. Here’s a first example appropriate for my story.

How do we define the function \exp(z) = e^z for complex numbers z? First we define a^b for integers a > 0 and b. Then we extend it to rationals, and finally, by requiring that the resulting function be continuous, to reals. As it happens, the resulting function is infinitely differentiable. In fact, if we choose a to be e, the limit \lim_{n\to\infty} (1 + \frac{1}{n})^n, then not only is e^x infinitely differentiable, but it is its own derivative. Can we extend the definition of \exp(z) to complex numbers z? Yes, in an infinite number of ways, but if we impose the reasonable requirement that the extension too be infinitely differentiable, then there is only one way to extend \exp(z).

That’s amazing!

The resulting function \exp(z) obeys all the expected laws of exponents. And we can prove that the function when restricted to reals has an inverse for the entire real number line. So define a new function \ln(x) which is the inverse of \exp(x). Then we can prove that \ln(x) obeys all of the laws of logarithms.

Or we could proceed in the reverse order instead. Define \ln(x) = \int_1^x \frac{1}{t}\,dt. It has an inverse, which we can call \exp(x), and then we can define a^b as \exp(b \ln(a)). We can prove that \exp(1) is the above-mentioned limit, and when this new definition of a^b is restricted to the appropriate rationals or reals or integers, we have the same function of two variables a and b as above. \ln(x) can also be extended to the complex domain, except the result is no longer a function, or rather it is a function from complex numbers to sets of complex numbers. All the numbers in a given set differ by some integer multiple of

[1] 2 \pi i.

With either definition of \exp(z), Euler’s famous formula can be proven:

[2] \exp(\pi i) + 1 = 0.

But where’s the circle that gives rise to the \pi in [1] and [2]? The answer is easy to see if we establish another formula to which Euler’s name is also attached:

[3] \exp(i z) = \cos(z) + i \sin(z).

Thus complex numbers unify two of the most frequent natural phenomena: exponential growth and periodic motion. In the complex plane, the exponential is a circular function.

That’s amazing!

Here’s a second example appropriate for my story. Define the function on nonnegative integers \text{factorial}(n) = n! in the usual way. Now ask whether there is a way to extend it to (some of) the complex plane, so that we can take the factorial of a complex number. There is, and as with \exp(z), there is only one way if we require that the resulting function be infinitely differentiable. The resulting function is (almost) called Gamma, written \Gamma. I say almost, because the function that we want has the following property:

[4] \Gamma (z + 1) = z!

Obviously, we’d like to stay away from negative values on the real line, where the meaning of (-5)! is not at all clear. In fact, if we stay in the half-plane where complex numbers have a positive real part, we can define \Gamma by an integral which agrees with the factorial function for positive integer values of z:

[5] \Gamma (z) = \int_0^\infty \exp(-t) t^{z - 1} dt .

If we evaluate \Gamma (\frac{1}{2}) we discover that the result is \sqrt{\pi} .
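One way to see why (a standard computation, not spelled out in the original post): substitute t = u^2 in [5], so that dt = 2u\,du, and the integral turns into the Gaussian integral:

\Gamma\left(\frac{1}{2}\right) = \int_0^\infty \exp(-t)\, t^{-1/2}\, dt = \int_0^\infty \exp(-u^2)\, \frac{1}{u}\cdot 2u\, du = 2\int_0^\infty \exp(-u^2)\, du = \int_{-\infty}^\infty \exp(-u^2)\, du = \sqrt{\pi}.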

In other words,

[6] \pi = \Gamma(\frac{1}{2})^2 .

Pi are indeed square.

That’s amazing!

I suspect that the \pi arises because there is an exponential function in the definition of \Gamma, but in other problems involving \pi it’s harder to find where the \pi comes from. Euler’s Basel problem is a good case in point. There are many good proofs that

1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6}.

One proof uses trigonometric series, so you shouldn’t be surprised that \pi shows up there too.
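The series converges quite slowly, which is easy to see numerically (a quick check of my own, not part of the original post):

```python
from math import pi

# Partial sum of the Basel series 1 + 1/4 + 1/9 + ... up to N terms.
N = 10_000
s = sum(1 / k**2 for k in range(1, N + 1))
print(s, pi**2 / 6)  # the partial sum undershoots by roughly 1/N
```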

\pi comes up in probability in Buffon’s needle problem because the needle is free to land with any angle from north.

Can you think of a place where \pi occurs, but you cannot find the circle?

George Lakoff and Rafael Núñez have written a controversial book that bolsters the argument that you won’t find any such examples: Where Mathematics Comes From. But Platonist that I am, I maintain that there might be such places.

Non-repeating sequences

What a fascinating question: can you create a sequence without any repetition? Randomness won’t do, since clumping will occur. It turns out that finding non-repeating sequences has important applications to sonar. If there’s any repetition in the sequence of sounds transmitted, when the signal returns, parts of the signal can be confused because there’s internal similarity. Watch the talk for the whole story, and enjoy the ‘ugliest piece of music’ at the end! 🙂

Why are infinite series so hard to grasp?

I’ve posted on infinite series a few times before. But I was inspired to touch on the topic again because I saw this post, yesterday, over at the Math Less Traveled. Actually, the post isn’t really about infinite series as much as it is about p-adic numbers and zero divisors. I’m excited to read more from Brent on this subject. But I digress.

The point I want to make with this post is that students struggle with wrapping their minds around convergent infinite series, and yet they live with them all the time. Students have inconsistently held beliefs about infinite sums.

The simplest convergent series is a geometric series \sum_{n=1}^\infty a_1r^{n-1}, which converges to \frac{a_1}{1-r} when |r|<1. The easy proof of this fact goes like this: we look at the sum formula for a finite geometric series, s_n=\frac{a_1(1-r^n)}{1-r}, and we notice that

\lim_{n\to\infty}\frac{a_1(1-r^n)}{1-r}=\frac{a_1}{1-r}

for |r|<1.

But this proof isn’t very satisfying for the student encountering infinite series for the first time ever. Evaluating the limit feels like ‘magic.’ The idea of adding up an infinite amount of things and getting a finite value is unsettling. I admit, it sounds like quite a lot to swallow. That being said, however, students have no problem declaring the infinite series

0.3 + 0.03 + 0.003 + 0.0003 + \cdots

to be 1/3. It’s not “close to” 1/3, it’s not “approaching” 1/3, it IS EQUAL TO 1/3. And my Precalculus students already accept this as fact. So without even thinking about it, they’ve been living with convergent infinite series all along. Hah!
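To make the limit feel less like magic, it helps to watch the finite sums in exact arithmetic. Here’s a sketch of my own for the series 0.3 + 0.03 + 0.003 + \cdots, i.e. a_1 = 3/10 and r = 1/10:

```python
from fractions import Fraction

a1, r = Fraction(3, 10), Fraction(1, 10)

def partial_sum(n):
    """s_n = a1 (1 - r^n) / (1 - r): sum of the first n terms, exactly."""
    return a1 * (1 - r**n) / (1 - r)

limit = a1 / (1 - r)
print(limit)                    # 1/3
print(limit - partial_sum(10))  # the gap shrinks like r^n
```

After only ten terms the gap from \frac{1}{3} is exactly \frac{1}{3\cdot 10^{10}}, and it keeps shrinking by a factor of r with each term.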

Once they finally shake their denial, they can more easily accept the convergence of other infinite series like \sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}. At first when students encounter a series like this, they think, “surely we can’t say the sum is EQUAL to \frac{\pi^2}{6}. It must be close to \frac{\pi^2}{6} or approach it, but equal to?” But the same students make no such distinction with 0.3+0.03+0.003+\cdots = \frac{1}{3}.

So there it is. An inconsistently held belief about infinite sums. To the student: you cannot have it both ways. You must either accept both of the following equations, or deny both:

0.3+0.03+0.003+\cdots = \sum_{n=1}^\infty 0.3(0.1)^{n-1}=\frac{1}{3}

1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\cdots = \sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}

But to believe one equation is true and the other is only ‘kind of’ true is inconsistent. I rest my case. 🙂