National Math Festival 2017

There was mathematical mayhem in DC on Saturday!

Did you miss it? Let me try to capture the day with some photos:

That’s just ONE room, just one part of a very large and increasingly popular National Math Festival.

This was the second festival; it is held every two years, alternating with the US Science and Engineering Festival. The festival was a huge success and was very well attended. I was a little cautious about attendance predictions, given that the festival had moved to the convention center from the National Mall, a location which benefited from wandering foot traffic.

This year, however, we benefited from the rain. It was dark and rainy all day long, but the National Math Festival provided a wonderful rainy-day escape from the dreary weather. See? Look at all the fun we’re having!

The photos you’re seeing here are all from the travelling exhibits brought to us by the Museum of Mathematics in NYC. I helped MoMATH coordinate volunteers this year, just as I did two years ago. And our volunteers were AWESOME!

We engaged thousands of people throughout the course of the day in meaningful mathematical play. There is a great need for this kind of popular focus on mathematics, one that illuminates its beauty, joy, and fun rather than the impression of difficulty and drudgery that so many people carry.

All my photos are MoMATH-focused, since that’s where I spent my day. You can find even more of my photos here. And you can see more coverage in my Twitter feed. For example, here’s a little clip of some juggling math:


Did you miss this year’s festival? Mark your calendars for April 2019 and make it a priority!


2017 Pi Day Puzzle Hunt Recap

Imagine 150 teens sleuthing around the school solving puzzles, skipping lunch every day to gain an advantage over other teams, and voluntarily tackling extremely difficult puzzles.

Welcome to the Third Annual RMHS Pi Day Puzzle Hunt. This year 36 teams competed for $200 in prize money, trophies and swag, and of course, GLORY. 🙂

There were eight challenging puzzles this year. A mural maze had students visiting other murals throughout the school in order to obtain the URL that gained them access to the next puzzle. The puzzles took students online, into classrooms, to lockers, and even onto the phone. Teams also received a UV light during the hunt to reveal secret messages (or cryptograms that still required decryption!). This year we did a better job of making the puzzles start out easy and slowly grow more difficult, so as not to discourage teams right away. Here are links to descriptions of all of the 2017 puzzles:

Each year we have tried to improve the hunt in substantial ways, including the appearance of “Stars” throughout the hunt, which earned teams extra points for finding hidden elements of puzzles or solving daily bonus puzzles. We also increased the prize money and upgraded the trophies this year.

We had some bumps in the road, but overall, the 2017 hunt was a success. Months of work, and now our third puzzle hunt is in the books.

For more details, including photos, videos, and the puzzles, visit the Pi Day Puzzle Hunt Website.

See you next year, kids!

Area models for multiplication throughout the K-12 curriculum

Let’s take a look at area models, shall we?

My thesis today is that area models should be ubiquitous across the entire curriculum because mathematics is a sense-making discipline. As math educators, we ought to encourage our students to take every opportunity to visualize their mathematics in an effort to illuminate, explain, prove, and build intuition.

So let’s take a walk through the K-12 math curriculum and highlight the use of area models as they might apply to arithmetic, algebra, and calculus.



Students experience area models for the first time in elementary school as they work to visualize multi-digit multiplication. The same model can be used for division by running the logic in reverse; that is, by seeking an unknown “side length” rather than an unknown area. And Base Ten Blocks can be used to help students understand the building blocks of our number system.

Here’s how you might work out 27\times 54:

27\times 54 = (20+7)(50+4)=(20)(50)+(20)(4)+(7)(50)+(7)(4)


27\times 54=1000+80+350+28=1458

The advantage of using a visual model like this is that you can easily see your calculation and explain why the constituent calculations, taken together, faithfully produce the desired result. If you did a “man on the street” interview with most users or purveyors of the standard algorithm, you would almost certainly not get a crystal-clear explanation for why it produces correct results. For a further discussion of area models for multi-digit multiplication, see this article, or read Jo Boaler’s now-famous book Mathematical Mindsets.
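If you like, the partial-products idea behind the area model is easy to sketch in code. Here’s a minimal Python sketch (my own illustration, not part of the original lesson) that splits each factor into place-value parts and sums the resulting “cells,” exactly as the rectangle picture suggests:

```python
def place_value_parts(n):
    """Split a positive integer into place-value parts, e.g. 27 -> [20, 7]."""
    parts = []
    place = 1
    while n > 0:
        digit = n % 10
        if digit:
            parts.append(digit * place)
        n //= 10
        place *= 10
    return parts[::-1]

def area_model_product(a, b):
    """Multiply by summing one partial product per cell of the area model."""
    cells = [pa * pb for pa in place_value_parts(a) for pb in place_value_parts(b)]
    return cells, sum(cells)

cells, total = area_model_product(27, 54)
print(cells)  # [1000, 80, 350, 28] -- the four cells of the rectangle
print(total)  # 1458
```

Each entry of `cells` corresponds to one sub-rectangle in the diagram, which is the whole point: the code makes the same four partial products visible that the picture does.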


In middle school, as students first encounter algebra, they may use area models to support their algebraic reasoning around multiplying polynomials. And in an Algebra 2 course they may learn about polynomial division and support their thinking using an area model in the same way they used area models to do division in elementary school. Here Algebra Tiles can be used as physical manipulatives to support student learning.

Here’s how you might work out (x+4)(2x+3):

(x+4)(2x+3)=(x)(2x)+(x)(3)+(4)(2x)+(4)(3)

(x+4)(2x+3)=2x^2+3x+8x+12=2x^2+11x+12
Notice also that if you let x=10, you obtain the following result from arithmetic:

14\times 23 = 200+110+12=322
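The same box-method bookkeeping works for polynomials. Here’s a small Python sketch (my own illustration; coefficient lists run from the constant term upward) that multiplies (x+4)(2x+3) cell by cell, then evaluates at x=10 to recover the arithmetic connection above:

```python
def poly_multiply(p, q):
    """Multiply polynomials given as coefficient lists [c0, c1, ...]
    (constant term first), summing each 'cell' p[i]*q[j] into degree i+j --
    the box method in code."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# (x + 4)(2x + 3): p = 4 + 1x, q = 3 + 2x
product = poly_multiply([4, 1], [3, 2])
print(product)  # [12, 11, 2], i.e. 2x^2 + 11x + 12

# Evaluating at x = 10 recovers 14 * 23
print(sum(c * 10**k for k, c in enumerate(product)))  # 322
```

Notice the loop does nothing place-value-specific; the base-ten case is just this algorithm with x fixed at 10, which is exactly the connection the Common Core emphasizes.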

The Common Core places special emphasis on making such connections. I agree with this effort, even though I can also commiserate with fellow math teachers who say things like, “My Precalculus students still use the box method for multiplying polynomials!” We definitely want to move our students toward fluency, but perhaps we should wait for them to realize that they don’t need their visual models. Eventually most students figure out on their own that it would be more efficient to do without the models.


Later in high school, as students first study calculus, area models can be used to bring understanding to the Product Rule–a result that is often memorized without any understanding. Even the usual “textbook proof” justifies but does not illuminate.

Here’s an informal proof of the Product Rule using an area model:

The “change in” the quantity L\cdot W can be thought of as the change in the area of a rectangle with side lengths L and W. That is, let A=LW. As we change L and W by amounts \Delta L and \Delta W, we are wondering how the overall area changes (that is, what is \Delta A?).

If the side length L increases by \Delta L, the new side length is L+\Delta L. Similarly, the width is now W+\Delta W. It follows that the new area is:

A+\Delta A=(L+\Delta L)(W+\Delta W)=LW+L\Delta W+W\Delta L+\Delta L\Delta W


Keeping in mind that A=LW, we can subtract this quantity from both sides to obtain:

\Delta A=L\Delta W+W\Delta L+\Delta L\Delta W

Now suppose L and W both depend on some variable x. Dividing through by \Delta x gives:

\frac{\Delta A}{\Delta x}=L\cdot\frac{\Delta W}{\Delta x}+W\cdot\frac{\Delta L}{\Delta x}+\frac{\Delta L}{\Delta x} \frac{\Delta W}{\Delta x} \Delta x

And taking limits as \Delta x\to 0 gives the desired result (the last term vanishes, since \frac{\Delta L}{\Delta x} \frac{\Delta W}{\Delta x} \Delta x\to \frac{dL}{dx}\cdot\frac{dW}{dx}\cdot 0=0):

\frac{dA}{dx}=L\cdot\frac{dW}{dx}+W\cdot\frac{dL}{dx}
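To build a little extra confidence in the Product Rule, here’s a quick finite-difference check in Python (my own sketch; the example functions L(x)=x^2 and W(x)=\sin x are arbitrary choices, not from the original post). For a small \Delta x, the difference quotient of A=L\cdot W should agree with L\cdot W'+W\cdot L':

```python
import math

def L(x): return x * x          # example length function (arbitrary choice)
def W(x): return math.sin(x)    # example width function (arbitrary choice)
def dL(x): return 2 * x         # its derivative
def dW(x): return math.cos(x)   # its derivative

x, dx = 1.3, 1e-6

# Difference quotient of A = L * W at x
lhs = (L(x + dx) * W(x + dx) - L(x) * W(x)) / dx
# Product Rule prediction
rhs = L(x) * dW(x) + W(x) * dL(x)

print(abs(lhs - rhs) < 1e-4)  # True: the two agree up to the step size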

If you’re like me, you once looked down on area models as being for those who can’t handle the “real” algebra. But if we take that view, there’s a lot of sense-making that we’re missing out on. Area models are an important tool in our tool belt for bringing clarity and connections to our math students.

Okay, so last question: Base Ten Blocks exist, and Algebra Tiles exist. What do you think? Shall we manufacture and sell Calculus DX Tiles © ? 🙂

I’m back

Hey everyone.

I took a two year hiatus from blogging. Life got busy and I let the blog slide. I’m sorry.

But I’m back, and my New Year’s Resolution for 2017 is to post at least once a month!


Here’s what I’ve been up to over the last two years:

  • Twitter. When people ask why I haven’t blogged, I say “twitter ate my blog.” It’s true. Twitter keeps feeding me brilliant things to read, engaging me in wonderful conversations, and providing the amazing fellowship of the MTBoS.
  • James Key. I consistently receive mathematical distractions from my colleague and friend, James, who has a revolutionary view on math education and a keen love for geometry. This won’t be the last time I mention his work. Go check out his blog and let’s start the revolution.

    with my nerdy friends named James

  • My Masters. I finally finished my five-year master’s program at Johns Hopkins. I now have an MS in Applied and Computational Mathematics…whatever that means!
  • Life. My wife and I had our second daughter, Heidi. We’re super involved in our church. I tutor two nights a week. Sue me for having a life! 🙂
family photo

  • New curriculum. In our district, like many others, we’ve been rolling out new Common Core aligned curriculum. This has been good for our district, but also a monumental chore. I’m a huge fan of the new math standards, and I’d love to chat with you about the positive transitions that come with the CCSS.
  • Curriculum development. I’ve been working with our district, helping review curriculum, write assessments, and I even helped James Key make some video resources for teachers.
  • Books. Here are a few I’ve read in the last few months: The Joy of x, Mathematical Mindsets, The Mathematical Tourist, Principles to Actions
  • Math Newsletters. Do you get the newsletters from Chris Smith or James Tanton (did you know he’s pushing three essays on us these days)? Email these guys and they’ll put you on their mailing lists immediately.
  • Growing. I’ve grown a lot as a teacher in the last two years. For example, my desks are finally in groups. See?
my classroom

  • Pi day puzzle hunt! Two years ago we started a new annual tradition. To correspond with the “big” pi-day back in 2015, we launched a giant puzzle hunt that involves dozens of teams of players in a multi-day scavenger hunt. Each year we outdo ourselves. Check out some of the puzzles we’ve done in the last two years.
  • Quora. This question/answer site is awesome, but be careful: you’ll be on the site, and an hour later you’ll look up and wonder what happened. Here are some of the answers I’ve written recently, most of which are math-related. I know, I know, I should have been pouring that energy into blog posts. I promise I won’t do it again.
  • National Math Festival. Two years ago we had the first ever National Math Festival on the mall in DC. It was a huge success. I helped coordinate volunteers for MoMATH and I’ll be doing it again this year. See you downtown on April 22!
famous mathematicians you might run into at the National Math Festival

Now you’ll hopefully find me more regularly hanging out here on my blog. I have some posts in mind that I think you’ll like, and I also invited my colleague Will Rose to write some guest posts here on the blog. Please give him a warm welcome.

Thanks for all the love and comments on recent posts. Be assured that Random Walks is back in business!

Proving identities – what’s your philosophy?

What happens in your classroom when you give students the following task?

Prove 1+\frac{1}{\cos{\theta}}=\frac{\tan^2{\theta}}{\sec{\theta}-1}.

Sometimes the command is Verify or Show instead of Prove, but the intent is the same.


Two non-examples

Here are two ways that a student might work the problem.

Method 1






Method 2





How do you feel about these methods? In my opinion, both methods represent a fundamental misunderstanding of the prompt. Method 1 is especially grotesque, but Method 2 also leaves a lot to be desired. Let me explain. And if you think the above methods are perfectly fine, please be patient and hear me out.

This is the crux of the issue:

The prompt was to prove the statement. But if the first line of our work is the very thing we’re out to prove, then we are already assuming the thing we want to prove. We’re Begging the Question.

It’s as if someone demands,

“Prove Statement X, please!”

and we reply,

“Well, let’s first start by assuming that Statement X is true.”

This is nonsense.

What went wrong?

So what is the proper way to engage this proof? Let’s roll back a bit.

The error in these approaches seems to stem from a desire to perform algebraic operations on both sides of an equation in the same way that you might if you were solving an equation.

When we “do algebra” and write Equation B below another Equation A without any words, we always mean that Equation A implies Equation B. That is, when we write

Equation A

Equation B

Equation C


we mean that Equation C follows from Equation B, which follows from Equation A.

Some might claim that each line should be equivalent to the last. But, again, when we “do algebra” by performing algebraic manipulations to both sides of an equation to transform it from equation A into equation B, we always mean A\Rightarrow B, we don’t mean A\iff B. Take, for example, the following algebra, which results in an extraneous solution:

\sqrt{x+2}=x

(\sqrt{x+2})^2=x^2

x+2=x^2

x^2-x-2=0

(x-2)(x+1)=0

x=2 \text{ or } x=-1

In this example, each line follows from the previous; however, reversing the logic doesn’t work. But we accept that this is the usual way we do algebra (A\Rightarrow B\Rightarrow C\Rightarrow \cdots). Here the last line doesn’t hold because only one solution satisfies the original equation (x=2). Remember that our logic is still flawless, though. Our logic just says that IF \sqrt{x+2}=x for a given x, THEN (\sqrt{x+2})^2=x^2.

As we move through the algebra line by line, we either preserve the solution set or increase its size. In the case above, the solution set for the original equation is {2}, and as we go to line 2 and beyond, the solution set is {2,-1}.
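The moral, that squaring can only enlarge the solution set, is easy to check mechanically. Here’s a small Python sketch (my own illustration) that tests each candidate from the squared equation against the original equation \sqrt{x+2}=x:

```python
import math

def satisfies_original(x):
    """Does x satisfy sqrt(x + 2) = x?"""
    return x >= 0 and math.isclose(math.sqrt(x + 2), x)

# Candidates from the squared equation x + 2 = x^2, i.e. (x - 2)(x + 1) = 0
candidates = [2, -1]
solutions = [x for x in candidates if satisfies_original(x)]
print(solutions)  # [2] -- the extraneous root -1 is filtered out
```

This “check candidates against the original equation” step is exactly the habit we want students to build whenever an irreversible operation like squaring appears in their work.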

For more, James Tanton has a nice article about extraneous solutions and why they arise, which I highly recommend.

So if this is the universal way we interpret algebraic work, which is what I argue, then it is wrong to construct an argument of the form A\Rightarrow B\Rightarrow C in order to prove statement A is true from premise C. The argument begs the question.

Both Method 1 and Method 2 make this mistake.


How does a proof go again?

I want to actually make a more general statement. The argument I gave above regarding how we “do algebra” is actually how we present any sort of deductive argument. We always present such an argument in order, where later statements are supported by earlier statements.

ANY time we see a sequence of statements (not just equations) A, B, C that is being put forward as a proof, if logical connectives are missing, the mathematical community agrees that “\Rightarrow” is the missing logical connection.

That is, if we see the proof A,B,C as a proof of statement C from premise A, we assume that the argument really means A\Rightarrow B\Rightarrow C.

This is usually the interpretation in the typical two-column proof, as well. We just provide the next step with a supporting theorem/definition/axiom, but we don’t also go out of our way to say “oh, and line #7 follows from the previous lines.”

Example: Given a non-empty set E with lower bound a and upper bound b, show that a\leq b.

1. E is non-empty and a and b are lower and upper bounds for E. (given)
2. Set E contains at least one element x. (definition of non-empty)
3. a\leq x and x\leq b. (definitions of lower and upper bound)
4. a\leq b. (transitive property of inequality)

Notice I never say that one line follows from the next. And also notice that it would be a mistake to interpret the logical connectives as biconditional.

The path of righteousness

I encourage my students to work with only ONE side of the expression and manipulate it independently, in its own little dark box, and when it comes out into the light, if it looks the same as the other side, you’ve proved the equivalence of the expressions.

For example, to show that \log\left(\frac{1}{t-2}\right)-\log\left(\frac{10}{t}\right)=-1+\log\left(\frac{t}{t-2}\right) for t>2, I would expect this kind of work for “full credit”:

\text{LHS }=\log\left(\frac{1}{t-2}\right)-\log\left(\frac{10}{t}\right)

=\log\left(\frac{t}{10(t-2)}\right)

= -1 + \log\left(\frac{t}{t-2}\right)

=\text{ RHS}
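A numeric spot-check of an identity like this is always reassuring. Here’s a quick Python sketch (my own illustration; the -1 term tells us these are base-10 logs) comparing both sides at a few values of t > 2:

```python
import math

def lhs(t):
    return math.log10(1 / (t - 2)) - math.log10(10 / t)

def rhs(t):
    return -1 + math.log10(t / (t - 2))

# Both sides agree at every sampled t > 2
print(all(math.isclose(lhs(t), rhs(t)) for t in [2.5, 3, 4, 10, 100]))  # True
```

Of course, agreement at sample points is evidence, not proof; the one-sided manipulation above is what actually proves the identity.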

Interestingly, I WOULD also accept an argument of the form A\iff B\iff C as justification for conclusion A from premise C, but I would want a student to say “A is true if and only if B is true, which is true if and only if C is true.” Even though it provides a valid proof, I discourage students from using this somewhat cumbersome construction.

So let’s return to the original problem and show a few ways a student could do it correctly.

Three examples

Method A – A direct proof by manipulating only one side






Method B – A proof starting with a known equality






Method C – Carefully specifying biconditional implications


\text{if and only if}


\text{if and only if}


\text{if and only if}


\text{if and only if}


While all of these are now technically correct, I think we all prefer Method A. The other methods are cool too. But please, please, promise me you won’t use Methods 1 or 2 which I presented in my introduction.

In conclusion

Some might argue that the heavy criticism I’ve leveled against Methods 1 and 2 is nitpicking. But I disagree. This kind of careful reasoning is exactly the business of mathematicians. It’s not good enough to just produce “answers”; our job is to produce good reasoning. Mathematics, remember, is a sense-making discipline.

Thanks for staying with me to the end of this long-winded post. Can you tell I’ve had this conversation with a lot of students over the last ten years?

Further reading

  1. Dave Richeson has a similar rant with a similar thesis here.
  2. This article was originally inspired by this recent post on Patrick Honner’s blog. A bunch of us fought about this topic in the comments, and in the end, Patrick encouraged me to write my own post on the subject. So here I am. Thanks for pushing me in the right direction, Mr. Honner!


What does it mean to truly prove something?

Let me point you to the following recent blog post from Prof Keith Devlin, entitled “What is a proof, really?”

After a lifetime in professional mathematics, during which I have read a lot of proofs, created some of my own, assisted others in creating theirs, and reviewed a fair number for research journals, the one thing I am sure of is that the definition of proof you will find in a book on mathematical logic or see on the board in a college level introductory pure mathematics class doesn’t come close to the reality.

For sure, I have never in my life seen a proof that truly fits the standard definition. Nor has anyone else.

The usual maneuver by which mathematicians leverage that formal notion to capture the arguments they, and all their colleagues, regard as proofs is to say a proof is a finite sequence of assertions that could be filled in to become one of those formal structures.

It’s not a bad approach if the goal is to give someone a general idea of what a proof is. The trouble is, no one has ever carried out that filling-in process. It’s purely hypothetical. How then can anyone know that the purported proof in front of them really is a proof?


Click the link to read the rest of the article. Also read the comments below the article to see what conversation has already been generated.

I won’t be shy in saying that I disagree with Keith Devlin. Maybe I misunderstand the subtle nuance of his argument. Maybe I haven’t done enough advanced mathematics. Please help me understand.

Devlin says that proofs created by the mathematical community (on the blackboard, and in journals) are informal and non-rigorous. I think we all agree with him on this point.

But the main point of his article seems to be that these proofs are non-rigorous and can never be made rigorous. That is, he’s suggesting that there could be holes in the logic of even the most vetted & time-tested proofs. He says that these proofs need to be filled in at a granular level, from first principles. Devlin writes, “no one has ever carried out that filling-in process.”

The trouble is, there is a whole mathematical community devoted to this filling-in process. Many high-level results have been rigorously proven going all the way back to first principles. That’s the entire goal of the metamath project. If you haven’t ever stumbled on this site, it will blow your mind. Click on the previous link, but don’t get too lost. Come back and read the rest of my post!

I’ve reread his blog post multiple times, and the articles he linked to. And I just can’t figure out what he could possibly mean by this. It sounds like Devlin thoroughly understands what the metamath project is all about, and he’s very familiar with proof-checking and mathematical logic. So he definitely isn’t writing his post out of ignorance–he’s a smart guy! Again, I ask, can anyone help me understand?

I know that a statement is only proven true relative to the axioms of the formal system. If you change your axioms, different results arise (like changing Euclid’s Fifth Postulate or removing the Axiom of Choice). And I’ve read enough about Gödel to understand the limits of formal systems. As mathematicians, we choose to make our formal systems consistent at the expense of completeness.

Is Devlin referring to one of these things?

I don’t usually make posts that are so confrontational. My apologies! I didn’t really want to post this to my blog. I would have much rather had this conversation in the comments section of Devlin’s blog. I posted two comments but neither one was approved. I gather that many other comments were censored as well.

Here’s the comment I left on his blog, which still hasn’t shown up. (I also left one small comment saying something similar.)

Prof. Devlin,

You said you got a number of comments like Steven’s. Can you approve those comments for public viewing? (one of those comments was mine!)

I think Steven’s comment has less to do with computer *generated* proofs as it does with computer *checked* proofs, like those produced by the community.

There’s a big difference between the proof of the Four Color Theorem, which doesn’t really pass our “elegance” test, and the proof of e^{i\pi}=-1 which can be found here:

A proof like the one I just linked to is done by humans, but is so rigorous that it can be *checked* by a computer. For me, it satisfies both my hunger for truth AND my hunger to understand *why* the statement is true.

I don’t understand how the metamath project doesn’t meet your criteria for the filling in process. I’ll quote you again, “The trouble is, no one has ever carried out that filling-in process. It’s purely hypothetical. How then can anyone know that the purported proof in front of them really is a proof?”

What is the metamath project, if not the “filling in” process?


If anyone wants to continue this conversation here at my blog, uncensored, please feel free to contribute below :-). Maybe Keith Devlin will even stop by!