ADC input impedance on MCUs
MCU ADC inputs can present a variable input impedance, depending on whether the sample-and-hold capacitor is currently connected to the pin. It can be worth the trouble to buffer the signal with an op amp. The op amp has the added benefit of letting you filter out frequencies above the Nyquist frequency (anti-aliasing), which is good practice anyway.
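For instance, here is a quick sketch (Python; the sampling rate and RC values are assumptions for illustration, not from the question) of checking that a first-order RC anti-alias filter's cutoff sits below Nyquist:

```python
# Sanity-check that an RC anti-alias filter cuts off below Nyquist.
# All values here are assumptions for illustration.
from math import pi

f_sample = 10e3              # assumed ADC sampling rate, Hz
f_nyquist = f_sample / 2     # highest representable frequency

r = 10e3                     # assumed filter resistor, ohms
c = 10e-9                    # assumed filter capacitor, farads
f_cutoff = 1 / (2 * pi * r * c)   # -3 dB point of a first-order RC filter

print(f"Nyquist: {f_nyquist:.0f} Hz, RC cutoff: {f_cutoff:.0f} Hz")
# -> Nyquist: 5000 Hz, RC cutoff: 1592 Hz (comfortably below Nyquist)
```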
Input Leakage Current
To determine the voltage drop across your resistors caused by the pin (gate) leakage, use the leakage current from the datasheet. Microchip specifies an "Input Leakage Current" on their datasheets. The [datasheet that I have looked up][1] specifies an input leakage current of 1 µA. Through, say, 100 kΩ of source resistance (V = I × R), that could cause a drop of 0.1 V (100 mV), which is only double what Robert calculated and probably not a problem for your signal. Now remember, if you are dividing a 30 V signal down to 30/11 (about 2.7 V) at full scale, the 100 mV is added to this, causing up to roughly 3.7% error on your 30 V reading.
If you need a resolution of 1 V at the input, that 1 V becomes 1 V / 11 ≈ 91 mV at the ADC pin, and the 100 mV offset is then larger than the signal step you are trying to resolve.
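To make the arithmetic concrete, here is a small sketch (Python; the 100 kΩ source resistance is an assumption chosen to match the 100 mV figure above):

```python
# Worst-case offset from input leakage, using the numbers above.
i_leak = 1e-6          # worst-case input leakage from the datasheet, A
r_source = 100e3       # assumed source resistance seen by the pin, ohms

v_offset = i_leak * r_source        # 0.1 V at the ADC pin
v_fullscale = 30 / 11               # ~2.73 V at the pin for a 30 V input

print(f"offset at pin: {v_offset * 1e3:.0f} mV")
print(f"fraction of full scale: {100 * v_offset / v_fullscale:.1f} %")
print(f"referred back to the 30 V input: {v_offset * 11:.1f} V")
```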
Input Capacitance
Robert is correct, there will be a capacitance, but what this really sets is the amount of time needed to take the ADC measurement. It also, combined with the input resistance you chose, creates a low-pass filter; if you want to measure higher-frequency signals, you will not be able to capture them.
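As a rough sketch of that low-pass effect (Python; the pin capacitance and divider resistance are assumed values):

```python
# Bandwidth of the RC formed by the source resistance and pin capacitance.
from math import pi

c_in = 10e-12        # assumed ADC pin capacitance, F
r_source = 100e3     # assumed Thevenin resistance of the divider, ohms

f_3db = 1 / (2 * pi * r_source * c_in)
print(f"-3 dB point: {f_3db / 1e3:.0f} kHz")   # ~159 kHz with these values
```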
Reducing the error
The easiest ways are to reduce the resistance of your divider, or to buffer your signal. When you buffer the signal you replace the PIC's leakage current with your op amp's input bias (leakage) current, which you can get quite low.
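As an illustration of how much buffering can help (Python; the 1 pA bias current is an assumed figure for a CMOS-input op amp, not a quoted part):

```python
# Offset error for the same assumed 100 kohm source resistance, with the
# PIC's worst-case leakage versus an assumed CMOS op amp bias current.
r_source = 100e3
for name, i_in in [("PIC leakage (1 uA)", 1e-6),
                   ("CMOS op amp bias (1 pA, assumed)", 1e-12)]:
    print(f"{name}: {i_in * r_source * 1e6:.3f} uV")
# -> 100000.000 uV for the PIC, 0.100 uV for the buffered case
```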
This 1 µA is a worst-case figure; unless it would cost you a large amount to make minor changes to the design later, fab your design and test how bad the error actually is for you.
One point not yet mentioned is switched capacitance on the input. Many ADCs will connect a capacitor to the input while they take a measurement and then disconnect it sometime later. The initial state of this cap may be the last voltage measured, VSS, or something inconsistent. For an accurate measurement, the input must either not budge when the capacitance is connected, or bounce and recover before the capacitor is disconnected; in practice, this means either that the capacitance on the input must be above a certain value, or else that the RC time constant formed by the input capacitance and source impedance must be below a certain value.
Suppose, for example, that the switched input capacitance is 10 pF and the acquisition time is 10 µs. If the input impedance is 100 kΩ, there is no input capacitance other than that of the ADC, and the difference between the starting cap voltage and the voltage to be measured is ΔV, then the RC time constant will be 1 µs (10 pF × 100 kΩ), the acquisition time will be 10 time constants, and the error will be ΔV/exp(10) (about ΔV/22,000). If ΔV might be as large as the full-scale voltage, that error is a problem for 16-bit measurements, but not for 12-bit measurements.
Suppose there were 10 pF of capacitance on the board in addition to the 10 pF of switched capacitance. In that case, the initial error would be cut in half (the two capacitances share charge when the switch closes), but the RC time constant would be doubled, so only five time constants elapse. Consequently, the error would be ΔV/2/exp(5) (about ΔV/300): barely good enough for an 8-bit measurement.
Increase the capacitance a little more and things get even worse. Push the board capacitance to 90 pF and the error would be ΔV/10/exp(1) (about ΔV/27). On the other hand, once the cap gets much bigger than that, the error goes back down: with 1000 pF the error would be about ΔV/110; at 10,000 pF (0.01 µF), about ΔV/1000; at 0.1 µF, about ΔV/10,000; and at 1 µF, about ΔV/100,000.
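The numbers above can be reproduced with a short sketch (Python; it models the switched cap as starting ΔV away from the input, charge-sharing with the board capacitance, then recovering through the source resistance):

```python
# Residual error after acquisition, as a fraction of the initial
# difference dV between the switched cap and the input voltage.
from math import exp

def settling_error(c_ext, c_sw=10e-12, r_s=100e3, t_acq=10e-6):
    c_total = c_sw + c_ext
    initial = c_sw / c_total        # charge sharing when the switch closes
    tau = r_s * c_total             # RC time constant of the recovery
    return initial * exp(-t_acq / tau)

for c_ext in [0, 10e-12, 90e-12, 1000e-12, 10e-9, 100e-9, 1e-6]:
    err = settling_error(c_ext)
    print(f"board cap {c_ext * 1e12:9.0f} pF -> error ~ dV/{1 / err:,.0f}")
# -> dV/22,026, dV/297, dV/27, dV/112, dV/1,011, dV/10,011, dV/100,011
```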