The tool we will be using to give evidence of hardness is reductions. We've been using reductions all along to get positive results (algorithms). For example, in homework you reduced the MAJORITY problem to the SELECTION problem as a way to get an algorithm for MAJORITY. When you reduce problem A to B, an algorithm for B gives you an algorithm for A. If the reduction itself is efficient, then an efficient algorithm for B gives you an efficient algorithm for A.
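The MAJORITY-to-SELECTION reduction mentioned above can be sketched concretely. This is a hedged illustration, not the homework solution: the `select` helper here is a stand-in (implemented by sorting just so the sketch runs; a real linear-time SELECTION algorithm such as median-of-medians would be used instead). The key observation is that if a majority element exists, it must be the median.

```python
def select(a, k):
    # Stand-in for the SELECTION subroutine: return the k-th smallest
    # element (0-indexed). Sorting is used only to keep the sketch
    # self-contained; a real implementation would run in O(n).
    return sorted(a)[k]

def majority(a):
    """Return the majority element of a (one appearing more than
    len(a)//2 times), or None if there is none.

    Reduction: if a majority element exists, it occupies more than half
    the positions of the sorted array, so it must be the median. One
    call to SELECT plus one counting pass decides MAJORITY."""
    candidate = select(a, len(a) // 2)
    return candidate if a.count(candidate) > len(a) // 2 else None
```

Since the reduction adds only a linear-time counting pass, an O(n) SELECTION algorithm yields an O(n) MAJORITY algorithm, which is exactly the "efficient reduction preserves efficiency" point above.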
In the theory we develop, we turn the use of reductions on its head. When we efficiently reduce A to B (written $A\le_P B$), not only do we show that if you can solve B well you can solve A well, but also the contrapositive: if you can't solve A by any reasonable algorithm, then you can't solve B by any reasonable algorithm. So that's the strategy: to show that a problem B is hard, find a problem A which you already believe to be hard, and then reduce A to B. (Don't get the direction backwards!)
Where do you get this already-believed-to-be-hard problem? The usual way is to look it up in a book! (Show [Garey, Johnson], a collection of believed-to-be-hard computational problems, and discuss.)
Now we go on to show an example of carrying out this strategy. First we defined Longest-Path as an optimization problem. Then we made a decision problem out of it and talked about using decision problems instead of optimization problems. We explained why, when giving negative results, this is a fine simplification to be making: if the decision version is hard, the optimization version is at least as hard. We selected Longest-Path as our first example because the previous lecture was on Shortest-Path; the contrast is nice.
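One direction of the decision/optimization relationship is worth seeing concretely: a decision oracle for "is there a simple path with at least $k$ edges?" recovers the optimum value with a binary search over $k$. This is a minimal sketch, assuming the oracle is handed in as a function; `n` is the number of vertices, so path lengths lie in $[0, n-1]$.

```python
def longest_path_value(n, decision_oracle):
    """Recover the length of the longest simple path in a graph on n
    vertices, given a decision oracle: decision_oracle(k) answers
    "is there a simple path with at least k edges?".

    Binary search over k in [0, n-1] uses O(log n) oracle calls."""
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2  # round up so the search makes progress
        if decision_oracle(mid):
            lo = mid   # a path with >= mid edges exists; answer is at least mid
        else:
            hi = mid - 1  # no such path; answer is below mid
    return lo
```

So solving the decision problem efficiently would also solve the optimization problem efficiently, which is why hardness of the decision version is the right thing to establish.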
We described Ham-Path and showed that it reduces to Longest-Path. We tried to show Ham-Cycle reduces to Longest-Path, but this is a little bit tricky and I messed it up. I fixed this in discussion section later in the afternoon; see those notes.
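The Ham-Path-to-Longest-Path reduction itself is a one-liner once stated: a graph on $n$ vertices has a Hamiltonian path iff its longest simple path has exactly $n-1$ edges. The sketch below makes this runnable by pairing the reduction with a brute-force decision procedure for Longest-Path (exponential time, which is fine for illustration; no efficient procedure is expected for a problem we are arguing is hard). The graph representation (a dict mapping each vertex to a set of neighbors) is my choice, not from the lecture.

```python
from itertools import permutations

def has_simple_path_with_k_edges(adj, k):
    """Brute-force decision procedure for Longest-Path: does the graph
    (dict: vertex -> set of neighbors) contain a simple path with k edges?
    Tries every ordered sequence of k+1 distinct vertices."""
    nodes = list(adj)
    if k + 1 > len(nodes):
        return False  # a simple path with k edges needs k+1 distinct vertices
    for perm in permutations(nodes, k + 1):
        if all(perm[i + 1] in adj[perm[i]] for i in range(k)):
            return True
    return False

def has_ham_path(adj):
    """The reduction: G has a Hamiltonian path iff its longest simple
    path has |V| - 1 edges, i.e. visits every vertex."""
    return has_simple_path_with_k_edges(adj, len(adj) - 1)
```

The reduction runs in constant time (it just sets $k = n-1$), so an efficient algorithm for Longest-Path would give an efficient algorithm for Ham-Path. Since Ham-Path is believed hard, Longest-Path must be too.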