
mxStandardizeLISRELpaths #400

Open
tbates opened this issue Nov 19, 2024 · 15 comments

@tbates
Member

tbates commented Nov 19, 2024

We have mxStandardizeRAMpaths for RAM models.
It would be great to add an mxStandardizeLISRELpaths function to support type = "LISREL" mxModels.

@mhunter1
Contributor

This is a fine idea, but I have no plans to do it in the next 1 to 10 years. There are about 300 lines in R/MxSummary.R for mxStandardizeRAMpaths. I welcome anyone interested in this feature to write it.

@mcneale
Contributor

mcneale commented Nov 19, 2024

Is the LISREL structure still popular? I feel like it fell out of use with the advent of Mplus. We don’t really have metrics for feature use, beyond counting the questions on the forums; such metrics might be worth collecting.

@RMKirkpatrick
Contributor

I actually agree with @tbates, though I'm not in any hurry to implement this function. I do think it should be done, though.

@mcneale
Contributor

mcneale commented Nov 19, 2024

The algebras for what I think was LISREL's option SS (standardized solution) are known, so perhaps it wouldn't be so difficult to implement. A student might be able to tackle it.
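For anyone picking this up, the core rescaling can be sketched numerically. The snippet below is a hedged illustration of the standardization algebra in RAM terms, not a call into the OpenMx API, and the matrix values are invented toy numbers; the same idea should carry over blockwise to the LISREL matrices:

```python
import numpy as np

# Toy 3-variable recursive model, x1 -> x2 -> x3 (hypothetical values,
# not drawn from any OpenMx example).
A = np.array([[0.0, 0.0, 0.0],
              [0.7, 0.0, 0.0],
              [0.0, 0.5, 0.0]])   # one-headed (asymmetric) paths
S = np.diag([1.0, 0.8, 0.6])      # two-headed (symmetric) paths: variances

I = np.eye(3)
B = np.linalg.inv(I - A)
V = B @ S @ B.T                    # model-implied covariance of all variables
D = np.diag(np.sqrt(np.diag(V)))   # implied standard deviations
Dinv = np.linalg.inv(D)

A_std = Dinv @ A @ D      # each path scaled by sd(source) / sd(sink)
S_std = Dinv @ S @ Dinv   # each (co)variance divided by both variables' SDs

# Sanity check: the standardized model implies a correlation matrix.
V_std = np.linalg.inv(I - A_std) @ S_std @ np.linalg.inv(I - A_std).T
print(np.round(np.diag(V_std), 10))   # diagonal of ones
```

Because $(I - A^{*}) = D^{-1}(I - A)D$, the standardized implied covariance is exactly $D^{-1} V D^{-1}$, i.e. a correlation matrix, which is what the sanity check confirms.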

@tbates
Member Author

tbates commented Nov 19, 2024

> Is the LISREL structure still popular? I feel like it went out of use with the advent of Mplus. We don’t really have metrics for feature use, beyond counting the questions on the forums. They might be useful to get.

I've not seen any use of it in papers, so I suspect you're right. But my motive for the suggestion was another question: the RAM models people are making with umxTwinMaker() expose the speed penalty of RAM relative to hand-crafted matrix implementations. I wondered if LISREL might offer a speed benefit, by virtue of the larger number of smaller matrices it creates. If not, then you're likely right... Is there any data comparing run time for the same model in RAM and LISREL?

@mcneale
Contributor

mcneale commented Nov 19, 2024

Yeah, the big A and S matrices are not the most efficient, in general, even if (I-A) can be inverted efficiently. However, I don't think the LISREL formulation would be better for large models, particularly in Behavior Genetics (where most models get at least doubled in size). Indeed, classic Mx, with its combination of a matrix algebra interpreter and a numerical optimizer, was largely motivated by the painful struggle to convert models like ACE into the one and only general LISREL formula. In the end, I think it most intuitive either to use A and S matrices, which are simple and general, or to specify the model in terms of matrices that directly implement the way we think about the model.
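For reference, the two expectations being compared are (in standard SEM notation, not drawn from the OpenMx source) the RAM formula

$$\Sigma = F (I - A)^{-1} S (I - A)^{-\top} F^{\top}$$

versus the general LISREL expectation for the observed y-variables,

$$\Sigma_{yy} = \Lambda_y (I - B)^{-1} \left( \Gamma \Phi \Gamma^{\top} + \Psi \right) (I - B)^{-\top} \Lambda_y^{\top} + \Theta_\varepsilon,$$

which is why converting a model like ACE means first forcing it into the $\Lambda$ / $B$ / $\Gamma$ / $\Phi$ / $\Psi$ partition.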

Our addition of algebraic derivatives for RAM-type models is operational, but so far has not yielded much speed improvement in smallish models. Larger models might show a greater benefit.

@tbates
Member Author

tbates commented Nov 19, 2024

> algebraic derivatives for RAM-type models is operational but as yet has not been yielding very much improvement in speed in smallish models. Larger models might show a greater benefit.

@lf-araujo might be able to test that: compare OpenMx pre and post algebraic derivatives. What OpenMx version did algebraic RAM come in on? I can't see any mention in the git notes: is this feature on a branch somewhere?

PS: Does anyone have a guess about what lavaan does for speed in its implementation?

@RMKirkpatrick
Contributor

> What OpenMx version did algebraic RAM come in on? I can't see any mention in the git notes: is this feature on a branch somewhere?

Yes, it is all in my branch, 'analytDerivs'.

> @lf-araujo might be able to test that: compare OpenMx pre and post algebraic derivatives.

That's already been tested. Automated analytic derivatives have been a huge disappointment... so much so that they will be switched OFF by default when 'analytDerivs' is merged into 'master'.

@mcneale
Contributor

mcneale commented Nov 19, 2024

There was some improvement with larger models, iirc. Since large RAM models are being developed for multivariate analyses, I haven't given up hope on it delivering a performance benefit.

@RMKirkpatrick
Contributor

RMKirkpatrick commented Nov 19, 2024

> There was some improvement with larger models, iirc.

Yes: there was only one script from make nightly, models/nightly/startsTestMissing.R, for which analytic derivatives improved running time by more than 5 seconds and by more than 10%, and that was only with two of the three main optimizers. EDIT: I believe what I wrote in this paragraph is incorrect, because I was looking at the wrong set of results.

That's why I'm saying the new analytic RAM derivatives should be off by default.

@mcneale
Contributor

mcneale commented Nov 19, 2024

In an ideal world, we would by default switch them on when they are expected to help, and off when they are not. It might not take much more than noting the number of variables in the largest RAM model in the mxModel.

It would be good not to sink lots of effort into something that doesn't deliver results, but sometimes there is nothing better than to try it and see. I'd very much like derivatives for ordinal data analyses, where the CPU time required is orders of magnitude greater than for continuous data.

@mhunter1
Contributor

Although interesting and valuable, this conversation is pretty far afield from mxStandardizeLISRELpaths.

@RMKirkpatrick
Contributor

> Although interesting and valuable, this conversation is pretty far afield from mxStandardizeLISRELpaths.

I created #402 to carry on this off-topic conversation.

@tbates closed this as not planned Nov 20, 2024
@RMKirkpatrick
Contributor

I do not want this issue to remain closed as "not planned", because I DO indeed think that mxStandardizeLISRELpaths()-like functionality ought to be implemented. It is not high-priority, but it ought to happen eventually.

@RMKirkpatrick reopened this Dec 10, 2024
@mcneale
Contributor

mcneale commented Dec 10, 2024

I think it would be good to add the math here, just for the record, and so that a user could implement it easily as an add-on function.
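For the record, here is a sketch of that math, offered as my reading of the standard RAM standardization rather than anything verified against R/MxSummary.R. With the model-implied covariance of all variables, latent and manifest,

$$V = (I - A)^{-1} S (I - A)^{-\top}, \qquad D = \operatorname{diag}(V)^{1/2},$$

the standardized paths are

$$A^{*} = D^{-1} A D, \qquad S^{*} = D^{-1} S D^{-1}:$$

each one-headed path is multiplied by sd(source)/sd(sink), and each two-headed path is divided by the product of the two variables' standard deviations. The LISREL analogue would apply the same blockwise rescaling to $\Lambda_y$, $\Lambda_x$, $B$, $\Gamma$, $\Phi$, and $\Psi$, using the implied standard deviations of $y$, $x$, $\eta$, and $\xi$.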
