Should I use prepared statements for MySQL in PHP PERFORMANCE-WISE?
Not in all cases. There are times when a prepared statement performs slower than a manually constructed query, even when it is run repeatedly.
For example, when I need to insert hundreds of rows in one page load, I find:
$pdo->query("INSERT INTO table (field1, field2, ...) VALUES (".$pdo->quote($value1).", ".$pdo->quote($value2).", ...), (row2), (row3), ..., (row100), ...");
A LOT faster than:
$stmt = $pdo->prepare("INSERT INTO table (field1, field2, ...) VALUES (:value1, :value2, ...)");
foreach (/* some hundred times */) {
    $stmt->execute(array('value1' => $value1, 'value2' => $value2, ...));
}
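For concreteness, here is one way the single batched query above might be assembled. This is only a sketch: it assumes $pdo is an existing PDO connection and $rows is an array of rows holding the two hypothetical fields from the example.
// Sketch: build one multi-row INSERT with quoted values instead of
// executing a prepared statement per row. $pdo and $rows are assumed to exist.
$values = array();
foreach ($rows as $row) {
    $values[] = '(' . $pdo->quote($row['field1']) . ', ' . $pdo->quote($row['field2']) . ')';
}
$pdo->query('INSERT INTO `table` (field1, field2) VALUES ' . implode(', ', $values));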
UPDATE
I wrote this answer 4 years ago; since then I've learned new things about this.
For a prepared statement that is executed hundreds of times to be fast, you have to wrap all the executions in one transaction.
However, I never tested which is faster: a non-prepared single batch INSERT with 1000 rows, or 1000 executions of a prepared INSERT wrapped in a transaction.
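For reference, the transaction wrapping looks roughly like this. It's a sketch, assuming $pdo is an existing PDO connection in exception mode and $rows holds the data:
// Sketch: prepare once, then run every execute() inside one transaction.
// $pdo (PDO in ERRMODE_EXCEPTION) and $rows are assumed to exist.
$stmt = $pdo->prepare('INSERT INTO `table` (field1, field2) VALUES (:value1, :value2)');
$pdo->beginTransaction();
try {
    foreach ($rows as $row) {
        $stmt->execute(array('value1' => $row['field1'], 'value2' => $row['field2']));
    }
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack(); // undo the partial batch if any insert fails
    throw $e;
}
Most of the win comes from the server only having to flush once at commit instead of once per autocommitted INSERT.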
"Using prepared statements is never really a bad idea"
Not true. You can easily fill up the prepared statement cache and exceed max_prepared_stmt_count if you don't know what you're doing, rendering your app useless until the connections holding the prepared statements are closed.
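If you suspect you're close to that limit, you can compare the server's ceiling with the number of statements currently prepared. A minimal sketch, assuming an existing PDO connection $pdo:
// Sketch: compare max_prepared_stmt_count with the server-wide count of
// currently prepared statements. $pdo is assumed to exist.
$limit = $pdo->query("SHOW VARIABLES LIKE 'max_prepared_stmt_count'")->fetch(PDO::FETCH_ASSOC);
$inUse = $pdo->query("SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count'")->fetch(PDO::FETCH_ASSOC);
echo $inUse['Value'] . ' of ' . $limit['Value'] . " prepared statements in use\n";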
Prepared statements are specifically designed for tight inner loops in your business logic where you're going to be calling the same basic query over and over again, a good example being a parameterized query such as:
SELECT name, address, phone FROM tbl WHERE id = ?
and you have a different id on each call. That way, the extra round trip to the db for the prepare is worth it, because you're probably going to call it hundreds or thousands of times and only change the parameter. But you're expected to remove the prepared statement from the cache, or close the connection, at the end of, say, your stateless script (PHP, Perl, JSP, Ruby, etc.).
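In PDO terms, that "prepare once, execute many" pattern looks roughly like this (a sketch, assuming $pdo exists and $ids is the list of ids you loop over):
// Sketch: one prepare, many executes with only the parameter changing.
// $pdo and $ids are assumed to exist.
$stmt = $pdo->prepare('SELECT name, address, phone FROM tbl WHERE id = ?');
foreach ($ids as $id) {
    $stmt->execute(array($id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    // ... do something with $row ...
}
$stmt = null; // release the statement so the server-side handle can be freed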
If you don't remove the prepared statement and you are using a connection pool, you are bound to fill up the cache over time and hit the nasty error "Can't create more than max_prepared_stmt_count statements". I'm speaking from experience, so think about whether you really need prepared statements, i.e. whether you know definitively that you're going to be repeating the same parameterized query over and over in a tight loop.
If not, what you're probably looking for is letting MySQL use its basic query cache, which is a different mechanism from the prepared statement list and which, from my understanding, behaves like a true LRU cache.
The History
This was my first Stack Overflow answer. A lot has changed since then, especially the deprecation and removal of the mysql API. Even if you are still on PHP 5.6, the mysql_* API should not be used; PDO and mysqli are now the only options to choose from. PDO is the better choice for lots of reasons.
Are prepared statements cached across page loads?
I've read some conflicting reports about what PHP's mysqli and PDO libraries do. Does either of them cache the prepared statement across script executions?
The same prepared statement will not be reused between page loads. It has to be prepared every time. If squeezing out every last millisecond matters, a stored procedure might be a good idea (assuming you have a complicated query).
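Calling a stored procedure from PDO is itself just another prepared call; a quick sketch, where get_report and its parameter are hypothetical names:
// Sketch: calling a hypothetical stored procedure named get_report.
$stmt = $pdo->prepare('CALL get_report(:id)');
$stmt->execute(array('id' => $someId));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
$stmt->closeCursor(); // needed before running further queries on the same connection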
For large inserts (thousands of rows), a bigger boost can probably be gained by dumping your data into a text file and loading it with LOAD DATA INFILE. It's a lot faster than a series of inserts.
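The rough shape of that approach with PDO is sketched below; the connection details, file path, and table are placeholders, and LOAD DATA LOCAL INFILE must be allowed by both the server and the driver:
// Sketch: bulk-load a CSV file instead of running thousands of INSERTs.
// Connection details, file path, and table name are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', array(
    PDO::MYSQL_ATTR_LOCAL_INFILE => true,
));
// /tmp/rows.csv is assumed to hold one "value1,value2" line per row.
$pdo->exec("LOAD DATA LOCAL INFILE '/tmp/rows.csv'
    INTO TABLE `table`
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    (field1, field2)");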
The original answer
The truth of the matter is that sometimes mysqli is faster and at other times the mysql API is faster, but the difference is really small. If you look at any of the performance tests on the web, the difference is just 10-20 milliseconds. The best way to boost performance is to optimize table design.
Many of the tests that 'prove' the older API to be faster conveniently forget that, for maximum security, mysql_real_escape_string() should be called for each variable used in the query.
Queries are cached by the server if and only if the data in all of the tables used in the query has remained unchanged.
Await another update with actual numbers