On Fri, Jan 23, 2009 at 11:09 AM, Juan Pedro Steibel
wrote:
> Hello all,
> This is my first post in the list as I just started to use lmer for my
> routine analyses (side by side with P.Mixed... just for now 8^D ).
>
> The following comment caught my attention:
>>
>> I have not verified the results from the current mcmcsamp, even for
>> simple Gaussian models. They seem reasonable for these models but I
>> need to look at them much more closely before I could advise trusting
>> those results.
>>
>> The problem with designing an mcmcsamp method is that the variances of
>> the random effects can legitimately be zero and often have a
>> nonnegligible probability of assuming the value of zero during the
>> MCMC iterations. However, most methods of sampling from the
>> distribution of a variance are based on sampling from the distribution
>> of a multiplier of the current value. If the current value is zero,
>> you end up stuck there.
>>
>
> If I understand correctly, it is claimed that once the Markov chain hits
> a value of 0 for a given VC, it stays there. Is this correct? Should I
> interpret the statement above differently?
>
> This is not the behavior I am observing in mcmcsamp. Fitting models with
> a nonnegligible posterior probability that a certain VC = 0, I see the
> chain hitting zero for a while, then leaving (VC > 0) and coming back...
> The actual mixing is very good, even for a VC that is estimated as zero
> by REML (the posterior mode from mcmc is practically zero). Thanks in
> advance.
I should have been more careful in what I wrote. I meant to say that
dealing with the possibility of having a zero variance component is
the reason that it is difficult to formulate MCMC methods for these
models. The straightforward method doesn't work.
The mcmcsamp function doesn't use the straightforward sampling method
for the variance components. It uses an indirect method that allows
it to visit zero without getting stuck.
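To make the stuck-at-zero point concrete, here is a minimal toy sketch in Python (not the actual mcmcsamp code, which, as noted above, uses an indirect method). A multiplicative proposal scales the current variance by a positive random factor, so once the chain sits at exactly zero, every subsequent proposal is also zero and the chain can never leave:

```python
import random

def multiplicative_proposal(v, scale=0.5):
    # Propose a new variance as a positive multiple of the current value.
    # This is the "straightforward" scheme described above, shown only to
    # illustrate the failure mode; it is not what mcmcsamp does.
    return v * random.lognormvariate(0, scale)

v = 0.0  # chain currently at a zero variance component
samples = [v]
for _ in range(100):
    v = multiplicative_proposal(v)  # 0 * (positive factor) == 0
    samples.append(v)

print(all(s == 0.0 for s in samples))  # the chain never escapes zero
```

Because the proposal is a multiple of the current state, no accept/reject step can rescue it: every proposed value from state 0 is 0, which matches the "you end up stuck there" remark in the quoted text.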
