mxStandardizeLISRELpaths #400
Comments
This is a fine idea, but I have no plans to do it in the next 1 to 10 years. There are about 300 lines in R/MxSummary.R for `mxStandardizeRAMpaths`.
Is the LISREL structure still popular? I feel like it went out of use with the advent of Mplus. We don't really have metrics for feature use beyond counting the questions on the forums; such metrics might be worth collecting.
I actually agree with @tbates, though I'm in no hurry to implement this function. I do think it should be done eventually.
The algebras for what I think was LISREL's SS option (standardized solution) are known, so perhaps it wouldn't be so difficult to implement. A student might be able to tackle it.
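For the record, a sketch of that algebra (one common version, the completely standardized solution, written here for an all-endogenous model; the symbols are standard LISREL notation, not anything OpenMx-specific):

$$
\operatorname{cov}(\eta) = (I - B)^{-1}\,\Psi\,(I - B)^{-\top},
\qquad
\Sigma_{yy} = \Lambda_y\,\operatorname{cov}(\eta)\,\Lambda_y^{\top} + \Theta_{\varepsilon}
$$

$$
\Lambda_y^{*} = D_y^{-1}\,\Lambda_y\,D_\eta,
\qquad
B^{*} = D_\eta^{-1}\,B\,D_\eta
$$

with $D_y = \operatorname{diag}(\Sigma_{yy})^{1/2}$ and $D_\eta = \operatorname{diag}(\operatorname{cov}(\eta))^{1/2}$. A full model with exogenous $\xi$ variables adds the analogous rescalings of $\Lambda_x$, $\Gamma$, and $\Phi$.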
I've not seen it used in papers, so I suspect you're right. But my motive for the suggestion was another question: the RAM models people are making with …
Yeah, the big A and S matrices are not the most efficient in general, even if (I - A) can be inverted efficiently. However, I don't think the LISREL formulation would be better for large models, particularly in behavior genetics (where most models at least double in size). Indeed, the development of classic Mx, with its combination of a matrix-algebra interpreter and a numerical optimizer, was largely motivated by the painful struggle to convert things like ACE models into the one and only general LISREL formula. In the end, I think it most intuitive to use A and S matrices, which are simple and general, or to specify the model in terms of matrices that directly implement the way we think about the model. Our addition of algebraic derivatives for RAM-type models is operational, but so far it has not yielded much speed improvement in smallish models. Larger models might show a greater benefit.
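For readers following along, a minimal plain-R illustration of the RAM formulation under discussion (the numbers are made up, and this is ordinary R, not OpenMx internals):

```r
# Model-implied covariance under RAM: C = F (I - A)^{-1} S (I - A)^{-T} F^T.
# Toy example: two manifest variables loading on one latent factor.
A <- matrix(0, 3, 3)                            # one-headed (asymmetric) paths
A[1:2, 3] <- c(0.8, 0.7)                        # factor loadings
S <- diag(c(0.36, 0.51, 1.0))                   # two-headed paths: residual and latent variances
Fmat <- cbind(diag(2), 0)                       # RAM filter matrix: keep the two manifest rows
IAinv <- solve(diag(3) - A)                     # the (I - A) inverse mentioned above
C <- Fmat %*% IAinv %*% S %*% t(IAinv) %*% t(Fmat)
C                                               # 2x2 model-implied manifest covariance
```

Because A and S are square in the total (manifest plus latent) variable count, doubling a model, as in behavior-genetic designs, quadruples the entries in those matrices, which is the inefficiency being discussed.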
@lf-araujo might be able to test that: compare OpenMx pre and post algebraic derivatives. What OpenMx version did algebraic RAM come in on? I can't see any mention in the git notes: is this feature on a branch somewhere? PS: Does anyone have a guess as to what lavaan does for speed in its implementation?
Yes, it is all in my branch, 'analytDerivs'.
That's already been tested. Automated analytic derivatives have been a huge disappointment... so much so that they will be switched OFF by default when 'analytDerivs' is merged into 'master'.
There was some improvement with larger models, IIRC. Since large RAM models are being developed for multivariate analyses, I haven't given up hope on it delivering a performance benefit.
Yes, there was only one script from … That's why I'm saying the new analytic RAM derivatives should be off by default.
In an ideal world, we would switch them on by default when they are expected to help, and off when they are not. It might not take much more than noting the number of variables in the largest RAM model in the mxModel (see the sketch below). It would be good not to sink lots of effort into something that doesn't deliver results, but sometimes there is nothing better than to try it and see. I'd very much like derivatives for ordinal data analyses, where the CPU time required is orders of magnitude greater than for continuous data.
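A hypothetical sketch of that switch (nothing here is an OpenMx API; `wantAnalyticDerivs`, `minManifests`, and the threshold of 50 are invented for illustration):

```r
# Hypothetical heuristic: request analytic RAM derivatives only when the
# largest RAM model in the model tree has enough manifest variables to
# plausibly benefit.
wantAnalyticDerivs <- function(model, minManifests = 50) {
  sizes <- integer(0)
  visit <- function(m) {
    if (inherits(m$expectation, "MxExpectationRAM")) {
      sizes <<- c(sizes, length(m$manifestVars))
    }
    for (sub in m$submodels) visit(sub)   # recurse into submodels
  }
  visit(model)
  length(sizes) > 0 && max(sizes) >= minManifests
}
```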
Although interesting and valuable, this conversation is pretty far afield from `mxStandardizeLISRELpaths`.
I created #402 to carry on this off-topic conversation. |
I do not want this issue to remain closed as "not planned", because I DO indeed think that `mxStandardizeLISRELpaths` should eventually be implemented.
I think it would be good to add the math here, just for the record, and so that a user could implement it easily as an add-on function. |
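A minimal sketch of such an add-on, assuming an all-endogenous `type = "LISREL"` mxModel whose matrices are named `LY`, `BE`, `PS`, and `TE` (the function name and those matrix-name assumptions are mine, not OpenMx's; exogenous blocks, means, and standard errors are all omitted):

```r
# Sketch: completely standardized solution for a fitted all-endogenous
# LISREL-type model, implementing the diagonal rescaling given above.
standardizeLISRELpaths <- function(model) {
  LY <- model$LY$values                      # factor loadings (y on eta)
  BE <- model$BE$values                      # regressions among latents
  PS <- model$PS$values                      # latent residual (co)variances
  TE <- model$TE$values                      # manifest residual (co)variances
  IBinv  <- solve(diag(nrow(BE)) - BE)
  covEta <- IBinv %*% PS %*% t(IBinv)        # model-implied latent covariances
  covY   <- LY %*% covEta %*% t(LY) + TE     # model-implied manifest covariances
  Dy   <- diag(1 / sqrt(diag(covY)))         # inverse manifest SDs
  Deta <- diag(sqrt(diag(covEta)))           # latent SDs
  list(LYstd = Dy %*% LY %*% Deta,           # standardized loadings
       BEstd = solve(Deta) %*% BE %*% Deta)  # standardized regressions
}
```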
We have `mxStandardizeRAMpaths` for RAM models. It would be great to add an `mxStandardizeLISRELpaths` function to support `type = "LISREL"` mxModels.