One especially important point:
And even assuming that something like fMRI lie detection worked and could be administered to an unwilling participant -- what makes that so much worse than, say, regular lie detection? We've already decided as a society that we're okay with the idea of a lie detector (so comfortable, in fact, that we don't care that the ones we already have don't really work). Why would we be uncomfortable with a lie detector that simply uses a different technology?
The bigger problem, it seems to me, arises if these new technologies are as flawed as (or worse than) current ones, yet we trust them anyway. People tend to trust anything accompanied by a brain-scan image, regardless of its validity. Add that to the usual terrible job jurors do, and we've got a recipe for a new generation of faulty convictions.