Sametrica's first-generation logic model builder was not meeting expectations and was built on older technology. The team had been gathering customer feedback and brainstorming on its own about what would make a great logic model builder. We did a lot of whiteboard work, synthesis of ideas, and epic planning. Design was already underway, but the pace quickened when we won a round of innovation funding from the Government of Canada to build out this app and the AI we expected it would use.
Since we now had firm technical deliverable dates, testing our ideas had to be fast and as “real” as possible. Our testers came from three groups:
- Logic model practitioners
- Program managers
- Program staff
From paper to wireframe
Economists and mathematicians are fascinating stakeholders, but they think very differently from product people. I find the best way to get started is to take ideas out of heads and put them on paper. Once concepts are translated from sketch to wireframe, the testing of assumptions can begin in earnest. We can’t design together until the diverse people on the team have processed the ideas and we are on the same page about what we are designing.
Prototype 1 – CodePen
The construction of a logic model can vary tremendously; there is no standard formula. However, most logic models share a similar set of components. We needed to find out whether different practitioners could use our system to construct a logic model. At the same time, we needed to test our hunches about where we should insert AI.
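The shared set of components can be sketched as a small data model. This is illustrative only, not Sametrica's actual schema; the component kinds follow the conventional inputs → activities → outputs → outcomes chain, and the dependency check reflects the fact that practitioners build models in very different orders.

```typescript
// A minimal sketch of logic model components — illustrative, not the product's schema.
type ComponentKind = "input" | "activity" | "output" | "outcome" | "impact";

interface LogicModelComponent {
  id: string;
  kind: ComponentKind;
  label: string;
  dependsOn: string[]; // IDs of upstream components this one builds on
}

// Return the IDs of any dependencies that don't exist in the model,
// so a half-built model can be flagged rather than silently broken.
function missingDependencies(model: LogicModelComponent[]): string[] {
  const ids = new Set(model.map((c) => c.id));
  return model.flatMap((c) => c.dependsOn.filter((d) => !ids.has(d)));
}

const model: LogicModelComponent[] = [
  { id: "i1", kind: "input", label: "Staff time", dependsOn: [] },
  { id: "a1", kind: "activity", label: "Run workshops", dependsOn: ["i1"] },
  { id: "o1", kind: "output", label: "Workshops delivered", dependsOn: ["a1"] },
];
```

Keeping the components in a flat list with explicit dependencies, rather than a fixed grid, is one way to accommodate the lack of a standard formula.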
Logic model practitioners are few and far between, with huge geographic dispersion. Since we could not afford in-person visits, an online prototype seemed like the best way to gather feedback. Using CodePen, we got a prototype up and running.
Talking with people about this prototype underscored some major issues around how complex our product needed to be. We quickly added a CodePen of our calculations modal and listened as testers described how they construct calculations.
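The calculations testers described — combining indicators with arithmetic — map naturally onto a small expression tree. The source doesn't describe the modal's actual data model, so the types and example below are hypothetical, assuming indicator names like `totalCost` and `participants`.

```typescript
// Hypothetical expression tree for practitioner-built calculations.
type Calc =
  | { kind: "indicator"; name: string }
  | { kind: "constant"; value: number }
  | { kind: "op"; op: "+" | "-" | "*" | "/"; left: Calc; right: Calc };

// Evaluate a calculation against a set of indicator values.
function evaluate(calc: Calc, indicators: Record<string, number>): number {
  switch (calc.kind) {
    case "indicator":
      return indicators[calc.name];
    case "constant":
      return calc.value;
    case "op": {
      const l = evaluate(calc.left, indicators);
      const r = evaluate(calc.right, indicators);
      switch (calc.op) {
        case "+": return l + r;
        case "-": return l - r;
        case "*": return l * r;
        case "/": return l / r;
      }
    }
  }
}

// e.g. cost per participant = total cost / participants
const costPerParticipant: Calc = {
  kind: "op",
  op: "/",
  left: { kind: "indicator", name: "totalCost" },
  right: { kind: "indicator", name: "participants" },
};
```

A tree like this lets a modal build calculations incrementally, one operator at a time, the way testers described working.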
Prototype 2 – CodePen
The second prototype incorporated the feedback from the first. We also added more functionality so testers could enter realistic data. As people structured real logic models in the prototype, the interface grew cluttered, and the need for AI at multiple touchpoints, both seen and unforeseen, became very evident.
Experience Map – Narrowing the focus
Time was getting tight on our Government of Canada codebase deliverable. To help narrow the focus and get a usable MVP from our prototype research, we had testers map out their experience. We broke the system down into essential touchpoints, and for each touchpoint the testers noted their emotions, peaks and valleys, thoughts, and ideas.
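The per-touchpoint notes amount to a simple record, which can be sorted to find where the valleys cluster. The field names and rating scale here are illustrative, not the actual mapping template.

```typescript
// Sketch of one tester's note for one touchpoint — fields are illustrative.
interface TouchpointNote {
  touchpoint: string;
  emotion: string;
  rating: number; // peak/valley, e.g. -2 (deep valley) to +2 (peak)
  thoughts: string;
  ideas: string;
}

// Touchpoints rated below the threshold are the valleys — the spots
// where testers got stuck and fixes should be prioritized.
function valleys(notes: TouchpointNote[], threshold = 0): string[] {
  return notes.filter((n) => n.rating < threshold).map((n) => n.touchpoint);
}

const notes: TouchpointNote[] = [
  {
    touchpoint: "Add outcome",
    emotion: "frustrated",
    rating: -2,
    thoughts: "Not sure which column this belongs in",
    ideas: "Suggest a placement automatically",
  },
  {
    touchpoint: "Name model",
    emotion: "neutral",
    rating: 1,
    thoughts: "Straightforward",
    ideas: "",
  },
];
```

Aggregating valleys across testers is one way to turn qualitative mapping sessions into a ranked fix list for the MVP.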
Invision – Working out the kinks in the UI
The experience map showed us specific spots in the UI where testers were getting stuck or, in some cases, had latent needs we had not foreseen but that were essential for the MVP. We used Sketch and Invision to build a very realistic prototype of the UI fixes gleaned from the experience-mapping feedback. We needed to be sure our UI would be intuitive before we committed it to code. This exercise allayed our concerns, and we were ready to write the MVP stories.