Essentially ... the database contains two post tables: one of the actual posts, and a second of recent posts that have already been parsed (had their bbcode turned into HTML, so they're ready for output). This eliminates the overhead of parsing bbcode on the fly.
However, there is still the overhead of a JOIN clause in every SQL query that pulls posts, joining the post table to the post_parsed table. Additionally, as new posts are viewed, the post_parsed table has to be populated on the fly.
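To make the JOIN-plus-lazy-population pattern concrete, here's a toy sketch. SQLite stands in for the real database, and the table and column names (post, post_parsed, pagetext, pagetext_html) are guesses at the schema, not the actual one; parse_bbcode() is a stand-in for the real parser.

```php
<?php
// Toy sketch: one query LEFT JOINs the parsed cache onto the post table,
// and a cache miss triggers parsing + populating post_parsed on the fly.

function parse_bbcode($text) {
    // Stand-in for the real bbcode parser: escape, then handle [b] tags.
    return preg_replace('#\[b\](.*?)\[/b\]#s', '<b>$1</b>',
                        htmlspecialchars($text));
}

function fetch_post_html(PDO $db, $postid) {
    $stmt = $db->prepare(
        'SELECT p.pagetext, pp.pagetext_html
           FROM post p
           LEFT JOIN post_parsed pp ON pp.postid = p.postid
          WHERE p.postid = ?'
    );
    $stmt->execute(array($postid));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    if (!$row) {
        return null; // no such post
    }
    if ($row['pagetext_html'] === null) {
        // Cache miss: parse now and populate post_parsed on the fly.
        $row['pagetext_html'] = parse_bbcode($row['pagetext']);
        $ins = $db->prepare(
            'INSERT INTO post_parsed (postid, pagetext_html) VALUES (?, ?)');
        $ins->execute(array($postid, $row['pagetext_html']));
    }
    return $row['pagetext_html'];
}
```

This is exactly the overhead described above: every post fetch carries the JOIN, and first views pay an extra INSERT.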
What I actually went ahead and did (and have been working on) is replicating this post_parsed table within the filesystem. This eliminates the need to join against post_parsed when querying posts (and because both of these tables are enormous, it alleviates strain on the database server). Additionally, because two versions of each post (parsed and unparsed) no longer need to be sent back, the bandwidth between the web and file servers is reduced. It also cuts one query from each thread view, because post_parsed no longer needs to be updated for new posts that aren't already in its cache. Additionally ... because I am now storing each parsed post as its own .php file in the filesystem, eAccelerator (which stores compiled versions of PHP files in RAM) picks up on all of them. What this means is I'm essentially storing parsed, ready-for-output versions of all posts in RAM on the web server.
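A minimal sketch of the filesystem-cache idea follows. The paths, file layout, and function names are hypothetical (the real setup surely differs); the key trick is writing each parsed post as a tiny .php file, so a later include can be served from an opcode cache like eAccelerator rather than hitting the database.

```php
<?php
// Sketch of a per-post filesystem cache. Each parsed post becomes its own
// .php file; opcode caches keep the compiled file in RAM.

function cache_path($postid) {
    // Hypothetical layout: spread files across subdirectories so no
    // single directory grows huge. sys_get_temp_dir() stands in for the
    // real cache root.
    return sprintf('%s/post_cache/%03d/post_%d.php',
                   sys_get_temp_dir(), $postid % 1000, $postid);
}

function cache_post($postid, $parsed_html) {
    $file = cache_path($postid);
    @mkdir(dirname($file), 0755, true);
    // var_export() emits the HTML as a safely quoted PHP string literal,
    // so entities, quotes, and backslashes survive the round trip.
    $php = '<?php $post_html = ' . var_export($parsed_html, true) . ';';
    file_put_contents($file, $php);
}

function fetch_cached_post($postid) {
    $file = cache_path($postid);
    if (is_file($file)) {
        include $file;   // cache hit: opcode cache serves this from RAM
        return $post_html;
    }
    return null;         // miss: caller parses the bbcode, then cache_post()
}
```

On a miss, the caller would fall back to the database, parse the bbcode, and write the file, so the cache fills itself as threads are viewed.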
Unfortunately ... there have been some kinks along the way, such as my PHP-based cache not always updating when a post was edited (now fixed, per another thread), and then not properly adding and stripping HTML entities and slashes (now fixed as well, I hope).