Mark Lester C. Lacsamana
3 min read · Oct 28, 2016


Agreed on that point. When I said "mostly," it's because I had two formats here: the quantitative (the survey) and the qualitative (the interview), and I drew more insights about the media from the interviews (ergo why I used "mostly").

In the process of creating this case study, I already leaned toward more "why" questions than "how many" (again, the times have changed compared to five years ago; the quantitative data is already out there, with publications of data from the DOH, etc.), and I did the converse of what you described. I started with a quantitative survey to get a bigger-picture view of what I was looking at and see where we are, then went into a deep dive on why people thought or behaved that way. This is again because of a change in the times: HIV is already in some media spotlight thanks to previous efforts by different organizations such as Loveyourself. What I wanted to look at is where we are still lacking and where people are slipping through the cracks, and that narrative comes from looking at people's experiences, which you can only do through an interview.

An aspect I did not mention here was that the interviewees mostly came from people who took the survey, so I could relate the survey data both to people who have not gotten a test and to people who have gotten one, along with what their experience of taking the test was like.

We also have to go beyond thinking that it is just about understanding WHY; we need to ask people about their narratives of becoming acquainted with HIV-related media and their experiences during testing. Using the same open-endedness, we simply ask, "Can you tell me about your testing experience?" and simply by listening (before even probing) you already get insights from how they react and how they retell their stories. So it is more than just a WHY but also a WHAT, and we need to place as heavy an importance on that qualitative insight as we did on the quantitative.

I still admire how data-driven all campaign media has been, but it calls for more than simple statistical analysis of the target audience: it also calls for critical analysis of whether and why they respond to certain media, and finally, nothing we make should ever go out without actually being tested on an audience after it's been made. Even our choices of solutions based on data (say, demographic data) still come from an assumption (e.g., in today's context and for this specific topic: the target demographic is MSM or LGBT, therefore to catch their attention we need sexual imagery, which is itself an assumption, as even that demographic can be subdivided by the different ways people think and react), and assumptions must be tested and continuously scrutinized to check that they are performing the way we need them to.

Another part of this (since we are constantly learning and evolving as our messaging evolves) is how we properly measure whether something "works." Five years ago, because HIV wasn't talked about as much, our main measure was getting traction, so yes, that spread of awareness through virality was good. The next question, though, is: are we actually getting people tested (conversion and engagement)? Half of this is already being achieved thanks to the work that has been done (even in my small survey, a fairly large chunk of the male demographic had already gotten themselves tested), but people are slipping through the cracks, so we need to understand whether people are actually engaging with our messages (this is where the concept of vanity metrics comes into play versus more tangible metrics).
