Constructing a logic model is not an easy task. Choosing the right indicators takes a great deal of research, domain knowledge, and mental work. Most users are busy running organizations and do not have the time to learn the art of logic model construction, even though a sound logic model leads to better outcome measurement.
We suspected we needed AI to make our framework truly useful. However, we did not know exactly why, or how to explore the possible pathways for our users:
- Logic model practitioners
- Program managers
- Program staff
Throughout the impact measurement framework, user actions are really articulations of thought. The stakeholder definition UI below is a kind of mechanical drawing showing how a user might select and codify stakeholders. This type of exploration solidified our move toward developing an ontology paired with machine learning recommender systems. Having the AI synthesize the user's thinking and make recommendations made our framework much more usable: the user simply selects the most accurate recommended option, or refines the search, without having to wrestle with unwieldy tools.
The result was AI interactions that presented the user with vetted, appropriate choices drawn from the larger knowledge pool of the indicator universe. This removed much of the research and mental work from the selection process, making it easier for the user to generate an accurate logic model.
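As a rough illustration of the recommender idea described above, the sketch below ranks candidate indicators from a small hypothetical indicator universe by word-overlap similarity with the user's free-text description. The indicator names, descriptions, and the bag-of-words scoring are all assumptions for illustration; a production system would draw on the full ontology and a trained model rather than this minimal similarity measure.

```python
import math
from collections import Counter

# Hypothetical indicator universe: indicator name -> short description.
# These entries are illustrative, not the actual framework's data.
INDICATORS = {
    "Housing stability": "nights of stable housing secured for program participants",
    "Job placement rate": "participants placed in full time employment after training",
    "Food security": "households reporting reliable access to sufficient food",
}

def vectorize(text):
    """Turn free text into a simple bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors (0.0 if either is empty)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend(query, k=3):
    """Return the k indicator names most similar to the user's description."""
    q = vectorize(query)
    ranked = sorted(INDICATORS,
                    key=lambda name: cosine(q, vectorize(INDICATORS[name])),
                    reverse=True)
    return ranked[:k]

# A program manager describes their outcome in plain language and gets
# vetted indicator suggestions back, rather than researching from scratch.
print(recommend("participants in employment training placed in jobs"))
```

The design point this illustrates is the interaction pattern, not the scoring method: the user states what they mean in their own words, and the system surfaces a short list of vetted options to select or refine.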