
GiveGab

A Task-Oriented Summative Usability Study of the GiveGab Social Volunteering iOS App.

HCDE 417 Usability Research - Final Project, Winter 2014

About GiveGab

GiveGab is a “social network of volunteers and volunteer managers.” The company allows users of its website and mobile application to find volunteer opportunities, track their own impact, and connect with others in their community. GiveGab is accessible as a web application and as a dedicated iOS mobile application. The iOS application lets volunteers sign up for opportunities around them, build a custom profile and resume, and interact with fellow volunteers and organizations.

The Challenge

Research, Plan, and Conduct a Summative Usability Study of the GiveGab iOS Social Volunteering App.

Study Scope and Goals

This is a usability study with four participants, designed to collect data using scenario-based tasks, analyze the collected data, and report our findings. The goal of this study is to provide actionable usability findings and recommendations on a set of the GiveGab application's features. Additionally, we hope that reporting our findings and recommendations will help GiveGab recognize opportunities to improve their users' experiences. Our overall objectives for this study are to:

  • 1. Identify difficulties and frustrations users encounter performing tasks with different GiveGab features.
  • 2. Empathize with users' feelings about the volunteer community and the GiveGab experience.
  • 3. Gain insight into users' experience with our study and task designs.
Research Questions

Our study was designed to capture quantitative and qualitative data. Some data are metrics describing task completion, time on task, errors and satisfaction. Other data came from observations, participant comments, answers to questions, and the note takers' written notes. Aside from task metrics, we were particularly interested in learning about participants' experiences with the scenario-based tasks we had designed to answer the following research questions:

  • 1. How do users feel about the level of intuitiveness and ease of use when interacting with the application?
  • 2. Do users feel well guided and informed as they perform tasks on GiveGab?
  • 3. What confuses the user about any of the user interface designs, navigation flow, or documentation?
  • 4. Do users feel like they can join and contribute to the GiveGab volunteer community?
  • 5. What is the level of satisfaction and enjoyment for users as they experience the application?
Our Solution

We developed a usability study plan and usability test kit, assembled a mobile testing lab, and conducted a usability study in multiple locations over three days at the University of Washington in Seattle.

Mobile Test Lab

Our testing environment included video cameras, still cameras, and audio recording devices. Each of our team members took turns moderating, facilitating, data-logging and operating technical equipment.

The Study

Our study took place over three days on campus at the University of Washington with four screened participants who fit our target user profile. We conducted a thorough, task-based examination of the GiveGab application and collected data using a voice recorder, two video cameras, and extensive notes on users' think-aloud comments, body language, and other comments and expressions. During our sessions we collected metrics such as time on task, task completion rate, error rate and user satisfaction. Our research specifically explored user satisfaction in setting up profiles, signing up for volunteer opportunities, interacting with the interface, and logging volunteer time.

Recruiting

Our participant screener had four criteria: (1) participants had never used GiveGab before, (2) they had volunteered within the last three months, (3) they used smartphones, and (4) they were active social media users. Once participants were chosen, we scheduled study sessions with them. We were careful to send reminders and had back-up participants available.

At The Study Sessions

At the testing site, participants were introduced to our team and procedures, provided with detailed instructions, signed consent forms, answered questionnaires and performed tasks. Questionnaires were divided into four categories: pre-test, post-test, post-task and post-study.

Study Summary

Using the data we collected and the feedback from our participants, we derived several findings. The general consensus was that the GiveGab iOS application is well designed with an attractive user interface, and our participants liked the features it offered. However, we found shortcomings in several areas. Our users were not confident in the opportunity sign-up process and ran into several points of confusion when customizing their profiles. They also didn't think the social news feed did enough to attract their use and attention. Based on these findings, we recommend a sign-up flow with more visible confirmations, a setting dedicated to editing the profile, and the ability to filter relevant elements in the GiveGab news feed.

My Roles

My roles across this project shifted as we moved through different phases of development. I played the part of agile collaborative strategist, user researcher and data analyst, empathizing storywriter, user journey mapper, brainstorming ideation machine, sketcher, prototyper, wireframer, usability researcher and presenter.

Our Team

The Gab Monkeys are a team of four undergraduate students from the Department of Human-Centered Design and Engineering at the University of Washington in Seattle, Washington: Mark Stamnes, Brian Vergara, Chong Wang and Ethan Zheng.

Qualitative User Statements By Task
Sign-Up Process
  • “There is no visual confirmation, I'm confused.”
  • “I'm not sure when or what I signed up for.”
  • “It shouldn't be just sign up and opt out; the instructions should be more specific.”
Adding Volunteering Interests
  • “Adding interests is not easy to find.”
  • “Where's the button to add more interests?”
  • “It's not intuitive to press on the interests to add more.”
Connecting With Others
  • “Where are these people coming from?”
  • “Are these local people?”
  • “I want to see people's interests in my gab wall, under their post.”
Statistics

Completion Rate, Error Rate, Time on Task and Satisfaction Rate data were statistically analyzed for trends including mean, deviation, variance and confidence intervals.

[Image: statistical analysis of time on task, including mean, variance, deviation and confidence interval]
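As an illustration of the descriptive statistics behind that analysis, the sketch below computes the mean, sample variance, standard deviation and a 95% confidence interval for a set of time-on-task values. The numbers are hypothetical stand-ins, not our actual study data.

```python
import math
import statistics

# Hypothetical time-on-task samples in seconds, one per participant.
times = [35.0, 275.0, 61.0, 48.0]

n = len(times)
mean = statistics.mean(times)          # sample mean
variance = statistics.variance(times)  # sample variance (n - 1 denominator)
std_dev = statistics.stdev(times)      # sample standard deviation

# 95% confidence interval for the mean using the t-distribution;
# with n = 4 there are 3 degrees of freedom, so t* ≈ 3.182.
t_star = 3.182
margin = t_star * std_dev / math.sqrt(n)

print(f"mean = {mean:.2f} s, stdev = {std_dev:.2f} s")
print(f"95% CI = ({mean - margin:.2f}, {mean + margin:.2f})")
```

With only four participants the interval is very wide, which is one reason metrics like these are best reported as descriptive trends rather than strong statistical claims.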
Methods

Before we conducted the actual study, we developed an extensive Usability Test Plan. The test plan addresses our study's purpose, goals, and questions in detail, along with the approach we followed for the usability testing. From our plan we built a usability study kit that includes facilitation scripts, screening questionnaires, pre-test and post-test questionnaires, and data-logging forms. We began by publishing an initial call-for-participants survey and screener, but due to time constraints we ultimately had to draw participants from among friends. We did use our screener on each participant and made sure they all passed our screening criteria.

We conducted our usability study in three different library study rooms on the University of Washington campus. The first two sessions were held in a study room in Odegaard Library, the third in Foster Library, and the fourth in a different Odegaard study room. Four participants with recent volunteer experience were recruited, and none of them had used GiveGab before. Each session lasted 45 minutes, and each participant was given a set of five scenario-based tasks that a new user would have to work through, in the same order, using the GiveGab application. Both qualitative data (comments, facial expressions) and quantitative data (ratings, time spent on each task) were collected during each session for analysis.

Gab Monkeys Final Report

Methods and Approach


In each test session, we followed the same procedures:

  • 1. Set up the test environment
  • 2. Welcome participants
  • 3. Introduce the test
  • 4. Help participants become familiar with the equipment in the room
  • 5. Run the actual study
  • 6. Data logging, recording and debriefing
Set up the test environment.

Our team arrived 20-30 minutes before the scheduled start time of each usability session to allow enough time to set up our equipment and the room. Despite using three different rooms for our four sessions, we were able to keep our test environment consistent.

Welcome participants.

As each test participant arrived for their session, the facilitator thanked them and introduced every team member and their role for the test. Next, the facilitator explained what each piece of equipment was for, including both video cameras, the voice recorder and the testing device. The facilitator also provided refreshments to the participant.

Introduction.

In each test session, the facilitator used the facilitation script to introduce the test to the participant. During the introduction, the facilitator also read the consent form and provided a copy to the participant. After the participant signed the consent form and gave permission to start, the facilitator would begin the actual test. Participants were then asked to answer the pre-test questionnaire before beginning the tasks.

Data logging

During the test sessions, a note taker recorded participants' answers to the questions the facilitator asked throughout the session, while the observer took notes on participants' actions and verbal comments. The technician started the digital recorder as the test began and regularly checked the still camera and video camera to make sure they were positioned to record participants' interactions with the test phone as well as their behavior. After each task, participants were asked to rate the ease of the task and answer several task-related questions. After participants finished all the tasks, they were asked to answer a post-test questionnaire and a debriefing questionnaire. The whole team then thanked the participant and closed the session.

Test Environment and Set-Up

All of our usability study sessions were conducted in rooms big enough for five or more people. We chose rooms in quiet areas, such as the third floor of Odegaard Library and Foster Library, to eliminate distractions and outside noise. For each session, a facilitator worked with the participant, walking them through the whole session. The main observer and note taker sat opposite the participant but close enough to observe body language and facial expressions. The second note taker and technician sat across the room, near the corner.

The participant was provided with an iPod Touch test device pre-installed with the GiveGab application to use during the study. Our team also had a spare iPhone 5S with GiveGab installed as a backup. We set up a camera on a tripod on the table, close to the participant, to record the iPod Touch screen as participants interacted with the application. In the corner opposite and facing the participant, we set up a larger video camera that recorded the session; its digital zoom let us play back footage zoomed in on the participant's head and body. A digital voice recorder ran from start to finish in each session. The note taker used a smartphone as a digital timer to record the start and end time of each task. All of the observers, note takers and technicians had laptops, and we actively wrote notes in Google Docs, a computer-supported collaborative workspace.

Usability Testing Roles

There were four roles in each testing session:

  • A facilitator, responsible for interacting with the participants and providing task cards as participants worked through the list of tasks
  • An observer, responsible for observing and recording participants' reactions as well as what participants said as they “thought out loud”
  • A note taker, who timed each task and took notes on participants' answers and comments to the questions the facilitator asked
  • A technician, who made sure every device, including the camera, video camera, digital recorder, test phone and spare test phone, was in perfect working condition, and who set up the devices in the room the same way for every session

Data Analysis Methods

“One of the keys to helping teams improve designs through usability is making results actionable” (Nielsen). In order to generate cogent and actionable information, we needed to organize and reduce our collection of quantitative and qualitative data. We began by ordering our data into four categories using a top-down method: pre-test, post-test, post-task and post-study. Post-task data are ordered by task, then by participant. All other data are ordered by category, then by participant.

We kept thorough logs of participants' comments and transcribed our manual data-logging sheets into a spreadsheet. Qualitative data were analyzed by carefully correlating observational data with users' think-aloud comments and answers to questions. As we organized and correlated our qualitative data, we began to see patterns and notice trends.

Our quantitative data were collated and entered into a spreadsheet by user, then by task. Our spreadsheet includes four types of quantitative data: Completion Rate, Error Rate, Time on Task and Satisfaction Rate. Data were analyzed for trends, means, variance, deviation and confidence intervals. We recognize that biases may exist in the Likert-scale user satisfaction ratings due to the error of central tendency, and potentially due to “rate consistency” (Barnum) in participants' responses.

Participants

Screener Results Summary

Our participant pool consisted of four screened UW students who use smartphones, are active social media users, had volunteered within the previous three months and were not familiar with the GiveGab application. Our initial published screening survey did not return enough eligible candidates in a reasonable timeframe, so we reached out to our university peers. All participants were students at the University of Washington: three undergraduates and one graduate student. The table below provides their information.

          Participant 1   Participant 2     Participant 3   Participant 4
Age       21              21                28              21
Gender    Male            Female            Male            Female
Degree    Bachelor        Bachelor          Master          Bachelor
Major     Pre-medicine    Computer Science  Business        Electrical Engineering

Table – Participant Summary Statistics

Participant Profile Characteristics

UW students. UW students were readily available to us, could be recruited in this context without Institutional Review Board approval and, as we discovered, easily fit our participant requirements.

Smartphone users. Because GiveGab is an iOS application, we felt smartphone users would be more likely to use the GiveGab application naturally, without the need for costly smartphone-use instruction.

Active social media users. Anyone who has and uses a Facebook, Instagram, Twitter, Pinterest, Tumblr, LinkedIn or similar account is considered a user of social media. People who visit such sites and applications online or through mobile apps are part of this group.

Volunteered within the previous three months. We didn't necessarily want our participants to be regular, every-week volunteers, but we wanted them to be familiar with what it means to search for, sign up for, and attend a volunteering opportunity. People fitting this group enjoy helping out in their communities and preferably have local volunteering experience. This group also includes individuals who have community service obligations (i.e. court-ordered, work-required or otherwise) and who search their communities for opportunities to gain “volunteer hours”.

Have not heard of GiveGab. If our participants are unfamiliar with GiveGab, they will have no preconceived notions or opinions about the application. Since we are seeking their honest opinion and feedback, it is better for them to have no prior impressions of GiveGab. We want participants who are familiar with the process of finding volunteering opportunities, but not via this platform. Encountering it fresh during our study gives them a blank slate from which to draw truly honest opinions and reactions.

Pre-Test Question Summaries and Table

Before each study, participants were asked six pre-test questions designed to ascertain their previous exposure to GiveGab and their volunteer experience within the previous three months. A seventh question prompted participants to imagine a mobile application for volunteers with all the features and functions they would need, and then asked what the most important feature would be in their minds.

Question 1 helped us reaffirm one of our participant criteria: that no participant had heard of GiveGab before.

Question 2 gave us an idea of how much our participants had volunteered in the previous three months. Their times ranged from 3 hours to 30 hours, with an average of 11.25 hours.

Question 3 provided insight into the methods our participants used to schedule volunteer activities. Two participants scheduled through club events, one used the King County volunteer website, and one just showed up to a volunteer event, signed a form and was let in. None of them used an iOS mobile application.

Question 4 provided insight into what participants perceived as the 'most difficult part' of finding and scheduling volunteer sessions. Overall, participants' answers indicate that lack of information is the most difficult part. Participants reported these difficulties: knowing requirements, locations, scheduling, the types of volunteering activities available, and contact information.

Question 5 asked if participants volunteered for the same organization regularly. All participants indicated that they do, although the language of this question may be flawed.

P1 answered this question using the term organization in a general sense, stating “hospitals” rather than a discrete entity like “Harborview Medical Center.” P1 indicated having volunteered for two volunteer organizations, both hospitals. The answers otherwise suggest that participants prefer consistency in their organizational volunteer activity.

Question 6 gave us information about participants' behavior in sharing their volunteer experiences across social media platforms. Two participants reported posting pictures, one on Instagram and the other on Facebook. One participant reported posting text on social media, and one reported that they do not “really advertise.” A topic for further research could be the reasoning behind people's choices to share volunteer information on social media.

Question 7 prompted participants to envision a mobile application for volunteering with all the features and functionality they need, and to tell us which feature or function was most important.

Pre-Test Questions Data Summary Table

Heard of GiveGab?
  P1: No
  P2: No
  P3: No
  P4: No

Time volunteering in the previous three months
  P1: 4 hrs/wk in summer
  P2: 3 hours
  P3: 20-30 hours
  P4: 8 hours on 1 day

How do you schedule volunteering?
  P1: Just showed up, signed a volunteer sheet, and they let me in
  P2: Uses websites (King County website for specific volunteering)
  P3: C4C club member; the club organizes volunteer opportunities
  P4: Club events

Most difficult part of finding and scheduling a volunteer session
  P1: Don't know if they want you, like their requirements. Sometimes they will reject, sometimes approve.
  P2: Not one location
  P3: Everyone is busy, so it's hard to find a time for everyone; scheduling is hard
  P4: Contacting the place; when you don't have an idea of the kind of work you want to do, you have to look for it yourself

Volunteer for the same organization regularly?
  P1: Yeah, worked for two volunteer orgs. Both are hospitals, one in China, one in Miami.
  P2: Yeah, because I know where to go and I enjoy volunteering with them, so I just go back to the same organization
  P3: Yes. C4C (Challenge for Charity)
  P4: Yes

Share your volunteer experience with others?
  P1: Yes, posted pictures on Instagram
  P2: Yes, on Facebook through pictures
  P3: With other club members, kids in the club, and everyone at the events, face to face; also on social media (textual)
  P4: Not really; tells people sometimes but doesn't really advertise

Envision a mobile app for volunteers
  P1: Maybe there can be a talk box to check with the volunteer facilitator and have conversations with them
  P2: Every event and location for volunteer opportunities
  P3: Needs to have all the information and details so he knows if he wants to go and if they need him
  P4: Probably being able to track location and from there find local events and opportunities

Findings and Recommendations

Test Completion Results

Gab Monkeys built a table of usability issues and rated their severity according to the following scale:

  1 – Prevents completion of task
  2 – Caused participant frustration and delay on task
  3 – Minor issue
  4 – Indicates a preference or suggestion

This table is the result of the analysis of our test results. The recommendations are cursory, and we would want the opportunity to expand upon our findings before submission to GiveGab. This is, of course, the first final usability study report any of us have ever written.

Finding | Recommendation | Severity
Users had trouble with the login procedure | Implement social media login | 2
No automatic login after registration | Recommend no change | 4
Poor category organization | Consider an information architecture analysis | 3
"My Interests" icon is not easy to find | Rebuild the My Interests icon to stand out | 4
Volunteer descriptions are unattractive | Display volunteer descriptions more attractively | 4
No feedback after signing up | Provide better onscreen feedback after signing up | 2
No feedback after signing up | Provide email confirmation after signing up for a volunteer opportunity | 2
User reports unfamiliarity with the term "pro bono" | Avoid language too specialized for a general audience | 3
Volunteer log input form does not provide feedback, options or search ability | Incorporate a search feature accessible across the application that searches first based on the context of the user's activity, then performs a more general search if nothing with a high enough relevance rating is found | 2

Findings Discussions

Users had trouble with the login procedure. Severity Rating 2. Several users reported difficulty with the initial sign-up procedure. They indicated they would prefer a social media login option.

No automatic login after registration. Severity Rating 4. It's a matter of convenience for users to be automatically logged into an application they have just signed up for, but there may be unidentified security issues if automatic login is standard. Conversely, if a person is not automatically logged in, they may accidentally lock themselves out of their account if they forget their password and will have to retrieve it.

Poor category organization. Severity Rating 4. Users will inevitably come along who fit into categories that are not listed, either because there are no related volunteer opportunities or because GiveGab has not chosen to include them. An example is Auction; we can only speculate as to why there is no Auction category. This is an inconvenience to users who want categories more in line with their personal preferences.

Volunteer descriptions are unattractive. Severity Rating 4. One user felt that the visual aesthetics of the volunteer descriptions were unattractive. This is a personal preference issue, although poor screen layout can cause confusion and frustration.

No feedback after signing up. Severity Rating 2. Users were often confused when they signed up for volunteering opportunities and received no confirmation, acknowledgement or confirmation email. This frustrated users, who would often wait for a response that never came.

User reports unfamiliarity with the term "pro bono". Severity Rating 3. This is a minor lexical issue. The term “pro bono” will be familiar to members of certain vocational groups, i.e. attorneys, but those unfamiliar with the vernacular of a more specialized field may be put off by the language.

Volunteer log input form does not provide feedback, options or search ability. Severity Rating 2. Users were slowed down by the lack of feedback in the log input form. Users also complained that the log form should have a search function to locate the specific organizations they had volunteered for when logging hours. This may not be standard practice, but it still caused frustration.
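The contextual-search recommendation above could work roughly like this sketch: try a context-scoped search first and widen to a general search only when nothing scores above a relevance threshold. The scoring function, names and data here are hypothetical illustrations, not GiveGab's actual API or behavior.

```python
def relevance(query: str, item: str) -> float:
    """Toy relevance score: fraction of query words found in the item."""
    words = query.lower().split()
    return sum(w in item.lower() for w in words) / len(words)

def two_stage_search(query, context_items, all_items, threshold=0.5):
    # Stage 1: search only items related to the user's current activity.
    scored = [(relevance(query, it), it) for it in context_items]
    hits = [it for score, it in sorted(scored, reverse=True) if score >= threshold]
    if hits:
        return hits
    # Stage 2: nothing relevant in context, so widen to a general search.
    scored = [(relevance(query, it), it) for it in all_items]
    return [it for score, it in sorted(scored, reverse=True) if score >= threshold]

context = ["Harborview Medical Center volunteer shift"]
everything = context + ["Seattle food bank drive", "Park cleanup day"]
print(two_stage_search("food bank", context, everything))
```

The point of the two stages is that the log form would first surface organizations the user has actually interacted with, falling back to the full catalog only when that fails.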

Post-Task Questionnaire Data Summary

Post-task data are presented in summarized form and in a table following the summaries. Initial findings are listed with the summaries, and they do factor into the reported findings and recommendations. Below are a graph and table showing the time-on-task data, followed by a task completion rate table. Finally, there are detailed analyses with summaries, tables, findings and recommendations for each task.

For each task a participant performed, the data logger recorded the time spent on the task and the task completion result. During the test, participants could take as long as they felt necessary to finish a task, but we asked them to give a clear statement like “I am done” to indicate completion. A participant could also indicate that they wanted to abandon a task or not continue. All tasks clearly indicated as “completed” by the participant were recorded as completed, while tasks that were failed, abandoned, or not actually completed (but indicated as “completed” by the participant) were recorded as uncompleted. The following chart and table show task statistics derived from time on task. Numbers at the end of the bars are in seconds, and the data are also shown in a table below the chart. There are definite outliers. Task-1 was open ended, so its times are not indicative of constrained task performance. Task-2 shows an outlier that we believe resulted from the participant being confused about the action of a “sign up” button (see Task-3 below).
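The completion bookkeeping described above amounts to a simple tally; the sketch below shows one way to compute a completion rate from such a log. The outcome labels and example data are hypothetical.

```python
# Hypothetical per-attempt outcome log: a task counts as completed only
# when the participant clearly indicated completion ("I am done").
# Failed, abandoned, or silently ended attempts count as uncompleted.
outcomes = {
    ("P1", "Task-1"): "completed",
    ("P2", "Task-1"): "completed",
    ("P3", "Task-1"): "abandoned",
    ("P4", "Task-1"): "completed",
}

def completion_rate(outcomes: dict) -> float:
    """Fraction of attempts explicitly recorded as completed."""
    done = sum(1 for label in outcomes.values() if label == "completed")
    return done / len(outcomes)

print(f"Task-1 completion rate: {completion_rate(outcomes):.0%}")
```

For the example data above this prints a 75% completion rate, since three of the four attempts were explicitly marked completed.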

Likert-scale ratings captured participants' difficulty with each task (1 = difficult, 5 = easy). We acknowledge that the data for Task-3 and Task-4 have holes and are therefore potentially unreliable, but they are included and analyzed throughout this report.

Task Analysis, Findings and Recommendations

Task 1 – Sign up for a GiveGab account. P1, P2 and P3 had password concerns. P1 and P3 felt they should be able to log in using existing social media credentials; they felt that having to remember a password on a device that was not theirs was a negative experience. P2 felt that having to log in again after registration was a negative experience. P2's sentiment is not correlated with that user's satisfaction rating; however, the sentiment of P1 and P3 is. Recommendation: implement social media login. We do not, however, recommend automatic login after registration: having users log in again makes them verify their credentials immediately and leaves them in a less vulnerable state.

Additional information from the Task 1 post-task questions shows positive sentiment, indicating participants were encouraged to continue exploring the application on deeper levels. Since Task 1 had an open-ended component, when asked to describe the features they remembered, participants' answers reflected the time they took to explore the app. P3's results may be biased because P3 explored for so long, learning the UI to the point of being able to duplicate what was learned on a later task.

Task 1

Did you encounter any difficulties signing up as a GiveGab user?
  P1: Typing password
  P2: Should not have to re-log in after registration
  P3: Facebook linking would make registration easier, i.e. Facebook is already active on the participant's phone
  P4: No

First impressions
  P1: It's cool, many features, there is a social network thing
  P2: Clean, know what icons mean, seeing what volunteering options are around
  P3: App is for young people because of the colors used; app designed for proactive users
  P4: Really like the layout of everything, well designed, it's very clean and straightforward for the most part about what to do

Can you describe the features you found for me?
  P1: Skills, causes, share volunteer experiences, find volunteer opportunities nearby
  P2: Seeing what volunteering options are around, hours, pictures
  P3: Discover, seek, post photos; thinks it's about one specific event
  P4: Own personal profiles, say what you are good at, a place where you're supposed to find opportunities (but it keeps crashing), a news feed to see other people and what they've done, a place to make goals and track progress

    Task 2 asks users to edit their profile to include a background in art and design. Participants were first asked about the categories in the "My Interests" section, then how they felt about the choices, and finally about any difficulties. All participants agreed the categories were good or pretty good; P4 remarked, "I was able to find mine so it's a good list." Interestingly, three out of four wanted more choices, while P4 seemed more satisfied, commenting that the category choices are "Good choices, I was able to find good ones, there's ones that fit me and will fit others." Participants did report difficulties, however. P1 and P2 found the "My Interests" icon hard to locate. P2's time on task was a significant outlier at 275 seconds; the other participants ranged from 35 to 61 seconds. P4 mistook interests for skills and did not realize "art and design" was a single choice, which suggests we could have designed the task better by clarifying that "art and design" is one interest area.
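    To sanity-check whether a time on task like P2's 275 seconds warrants outlier treatment, a simple median-based screen can be applied. This is only a sketch: P2's time is the one value reported exactly, so the other values below are hypothetical placeholders within the reported 35–61 second range, and the 3×-median threshold is an illustrative rule of thumb, not the method we used.

```python
import statistics

# Time-on-task values in seconds for Task 2. P2's 275 s is from the
# report; the other values are HYPOTHETICAL, chosen within the
# reported 35-61 s range for the remaining participants.
times = {"P1": 35, "P2": 275, "P3": 48, "P4": 61}

def flag_outliers(samples, factor=3):
    """Flag any participant whose time exceeds `factor` times the median."""
    median = statistics.median(samples.values())
    return [pid for pid, t in samples.items() if t > factor * median]

print(flag_outliers(times))  # P2's 275 s far exceeds 3x the ~55 s median
```

With a sample of only four, a robust screen like this is more defensible than a standard-deviation rule, since one extreme value dominates the mean.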

    Task 2

What do you think about the categories available in the "My Interests" section?
P1: It's good, extensive
P2: I wish the skill descriptions could be customized
P3: Good, would like to see categories more organized, i.e. arts and design, or communication, or talker, or gardener
P4: Pretty good list, hits all the different interests that people have, I was able to find mine so it's a good list

How do you feel about the choices available?
P1: Wants more choices
P2: Okay ranges
P3: Needs more categories; based on personal experience (a volunteer job for an auction), does not see any related category the participant was expecting
P4: Good choices, I was able to find good ones, there's ones that fit me and will fit others

Did you have difficulty understanding anything?
P1: Small "My Interests" icon, not easy to find
P2: Small icons, not easy to find. People might not know which ones to click
P3: No
P4: Misinterpreted "interests" vs. "skills" and stuff, didn't know "art and design" was one specific choice

    Task 3 – Note: data for Task 3 exclude P4, whose application kept crashing. The idea here is for participants to search for volunteer opportunities at the Seattle Art Museum (SAM), on the premise that they had volunteered there before. P1, P2, and P3 found the correct information. P2 commented that the aesthetics could be improved; this is a subjective position, and more research would be needed to determine whether the SAM volunteer sign-up area actually needs aesthetic improvement. P3 was not confident that the "sign up" button would confirm P3's intent to sign up for that specific volunteer opportunity. P2 and P3 agreed that the process is unclear: P2 wished there was a search function, and P3 used logic to deduce the correct organization to choose (SAM = Seattle Art Museum). We recommend revisiting this task with a larger sample size and a fully functioning application before drawing conclusions.

    Task 3

What information did you find?
P1: Found description, contact information, education and skills required
P2: A list of all the volunteer opportunities, people who volunteered there, info about the place, how to contact
P3: Volunteering opportunities and the positions + descriptions, positions, locations, contact name, email box, telephone number
P4: Skipped task, application repeatedly crashes

Did you find the information you needed?
P1: Yes
P2: Yes, but it can be presented in a more attractive way. It's all plain text, normally I wouldn't read all of them. I wish it can be designed more visually appealing.
P3: Kind of; sees different positions in volunteer opportunities (12 positions), does not know which position was signed up for, maybe only wants to be ambassador, doesn't know which position the "sign up" button is referring to
P4: Skipped task, application repeatedly crashes

How do you feel about the "discover" feature?
P1: Everything is there, so much to explore
P2: Not sure participant likes the feature, wished there was a search function
P3: It's OK, but not that clear; knows SAM and knows to try Seattle Art Museum
P4: Skipped task, application repeatedly crashes

    Task 4 – Note: data for Task 4 exclude P4, whose application kept crashing. Here participants were tasked with signing up for a volunteer opportunity at SAM. The general consensus is that users were not entirely sure whether they had actually signed up. Additionally, there was little enthusiasm for sharing the act of signing up to volunteer; participants gave reasonable justifications for not sharing, or for sharing only selectively. P1 would be embarrassed if P1 did not get the volunteering opportunity. P2 was more reserved, stating "If I really care about the cause, I will (post). If it's a cool cause." P3 took the thought exercise further, saying P3 would post "Maybe sometimes if there was something special or really interesting or my friends or connections would want to know then would share." The most glaring finding in Task 4 is that some users were not confident they had signed up to volunteer. Gab Monkeys recommend that GiveGab implement better post-activity feedback, including on-screen confirmation that a user has indeed signed up for an opportunity and, at the user's choice, confirmation sent by SMS or email.

    Task 4

Do you feel confident that you successfully signed up for the volunteer opportunity?
P1: Yes, pretty confident; wants the app to tell him if he signed up
P2: No. Not sure when or what she signed up for; all it says is to opt out
P3: Not very confident; there is no confirmation. Expected an email confirmation with date, time, location, and email address. There is no visual confirmation in the app; all it says is "opt out," which is still confusing
P4: Skipped task, application repeatedly crashes

Would you use the sharing function if you used the app in real life?
P1: No
P2: No
P3: Maybe
P4: Skipped task, application repeatedly crashes

Why / why not?
P1: Really embarrassed if he did not actually get the volunteering opportunity
P2: If I really care about the cause, I will. If it's a cool cause
P3: Maybe sometimes if there was something special or really interesting or my friends or connections would want to know then would share
P4: Skipped task, application repeatedly crashes

    Task 5 was an exercise for our team to log data from on-screen observations. Participants were asked to log a three-hour volunteer session at the Seattle Art Museum. Our team observed users in rotation, so each of us was able to watch users during the task. All users were able to locate the link to log hours. P1 reported the link (button) was not easy to find. P2 discovered a different navigation path, reaching the volunteer-logging screen from the home page; interestingly, P2's time on task is an outlier, and P2's path from the home page may be the shortest. P3 knew immediately where to go, supporting our earlier claim that P3 had learned the location of the link during the open-ended exploration in Task 1. P4 found the link, with no other information reported. We recommend revising this task in future usability studies, since our team may have been unaware of the faster path. Additionally, the data from P2 could be considered flawed and the data from P3 biased. P2 took an unexpected path, although going to a home page to reorient and then navigating further is probably common among participants in our target profile.

    Task 5 (Observations)

Able to locate link?
P1: Yes, it took time for the participant to find; comment: button not easy to find
P2: Yes, participant navigated from a button on the home page
P3: Yes, participant knew where to go, indicating the location was learned during Task 1 exploration
P4: Yes

Form intuitive?
P1: No
P2: Yes, step-by-step information is available; not a form, so you don't have to look at everything at once
P3: Yes, just follow instructions; participant felt well guided
P4: Yes, page by page is what you do, logging stuff

Form confusing?
P1: No
P2: I don't think so, because SAM was out there. I feel guided through.
P3: The term "pro bono"
P4: Yes, the location, because nothing popped up; it didn't give any options and the search didn't return anything

    Post-test Questionnaire Data Summary and Table

The post-test questionnaire helped clarify participants' immediate reactions to the study they had just experienced. While a full analysis of these data is warranted, our team ran out of time, so we will touch on the highlights. Participants generally expressed positive sentiment about the study: tasks were described as easy, typical, and straightforward, and participants understood that Gab Monkeys wanted to discover problems with the application. When asked about the types of features covered, participants were neutral (like P3, who stated "The two major features are social media and discover opportunities") or positive. Participants gave two kinds of answers about the speed of the study: P1 reported it was "fast," while the others reported a "good pace" (P2), "does not feel rushed" (P3), and "speed was fine" (P4). All participants said they enjoyed the study. When asked about frustrations, participants' responses echoed the earlier finding that a one-step social media login would make signing up much easier. P2 and P4 reported difficulty finding opportunities; unfortunately, as previously noted, P4's application kept crashing.

    Before the study, participants were asked to think about which feature of a fully featured volunteer application would be most important to them. We recognize that participants' answers to this question were influenced by their experience going through the study; the answers are nonetheless interesting. Based on participants' responses, we recommend further studies on the following topics:

Showing the skills needed for particular volunteer opportunities so users can tailor their profiles accordingly.
Incorporating geolocated volunteer opportunities.

    The ending questions were designed to garner sentiment about using GiveGab in the future and recommending the application to others. Half would consider using GiveGab in the future and 75% would recommend GiveGab to someone else.

    Initial thoughts about this study
P1: Simple and fast, a lot of equipment which is cool
P2: I think it is cool to look through this app, having people go through the real task is great.
P3: Good for us to figure out what we are thinking, what users are thinking.
P4: It was fine, there were a lot of questions to get to the answers

How do you feel about the tasks you performed?
P1: Um, most of them are very easy and straightforward. I feel good.
P2: Typical things people need to do to use the app. Good. SAM task is kind of interesting
P3: Thinks Gab Monkeys wants to discover problems with the app and figure out what the app looks like to users (maybe out of context)
P4: Straightforward, not too difficult

What do you think about the types of features covered in this study?
P1: Covered very specific features, though not all the features, very good enough to find the volunteer opt
P2: Good range of features, accidental clicking of the middle button led me to the log hours screen, UI is easy to navigate
P3: The two major features are social media and discover opportunities.
P4: Asked me to do various of things, which is good

How do you feel about the speed of the study?
P1: Fast.
P2: Good pace. Try to make it longer to add some comments
P3: Does not feel rushed, appropriate
P4: Speed was fine

Did you enjoy this study?
P1: Yes, cool app, everything is fast
P2: I think it's a cool idea
P3: Yes, thinking aloud is good (first time for participant)
P4: Yeah, sure, it has to do with apps on phones and I like to look at that stuff

Did you experience frustration with any of the tasks that you can specifically remember?
P1: Maybe not. The only one was during the sign-up step, but maybe that's my bad.
P2: Yes. Finding the opportunities.
P3: Maybe frustration around having to type in FB login info
P4: Discovering opportunities; finding places kept crashing

Before this test, you were asked to envision a mobile app for volunteers that has all the functionalities/features you felt you would need, and to think about which is the most important function/feature in your mind. Of these features, which ones did GiveGab address well?
P1: Specify the skills that he needs before using the feature to find the opportunity
P2: Geolocation related volunteer opportunities
P3: Clear contact information for volunteer opportunities
P4: Being able to find volunteer opportunities

What features/functions would you like to see added or taken away from GiveGab?
P1: No suggestions
P2: Calendar scheduling for volunteer opportunities, volunteer opportunity newsfeeds, see updated volunteer list based on interests or tags
P3: Add ability to specify skillsets for specific organizations participant is interested in
P4: Liked profile 'thing', not fond of news feed from people all over the country

If you could make a significant change to GiveGab, what change would you make?
P1: No changes
P2: Show people's interests under their posts. Sign up button should show sign-up date and position. In description, they tied the org to this app, should be more specific.
P3: Link to FB account. FB analytics to harvest locations, activities, friends, maybe provide a more customized experience; so far only sees one opportunity via zip code
P4: Link to FB account. Same as P3

Would you consider using GiveGab in the future?
P1: Yes
P2: Maybe
P3: Depends on entire user experience of a complete volunteer cycle
P4: Yes

Would you like to recommend GiveGab to other people?
P1: Yes
P2: Yes
P3: Maybe
P4: Yes

Do you have any additional comments about your experience using GiveGab?
P1: No
P2: No
P3: Good app.
P4: No

    Post-Study Questionnaire Data Summary

Post-study data were collected to gauge participants' experience with the study overall. Likeability of the study averaged 3. Participants reported nothing extraordinary, with the exception of P2, who stated "I am interested knowing when I should say 'I am done', sometimes I am not sure if I am done with a task." That is definitely something we will carry with us into future usability studies.

    How would you rate your overall experience with this usability study?
P1: More than liked
P2: Liked
P3: Somewhat Liked
P4: Liked

Do you have any thoughts about the usability study?
P1: No
P2: I am interested knowing when I should say "I am done", sometimes I am not sure if I am done with a task.
P3: Smooth, going well, first time trying
P4: Went as I expected, it was fine

Do you have any comments on how we can improve this study experience in the future?
P1: You did a good job, there is nothing I can say for suggestions.
P2: No. It was good.
P3: Good
P4: No, nothing stood out
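    The reported likeability average of 3 can be reproduced by mapping the verbal ratings to a numeric scale. The point values below are an assumed Likert mapping (not stated in the study materials), chosen so the labels sit in order and the result matches the reported average; only the labels themselves come from the data.

```python
# ASSUMED mapping of verbal rating labels to Likert points; the labels
# are from the post-study data, the numbers are hypothetical.
scale = {
    "Somewhat Liked": 2,
    "Liked": 3,
    "More than liked": 4,
}

# Overall-experience ratings as reported by each participant.
ratings = {"P1": "More than liked", "P2": "Liked",
           "P3": "Somewhat Liked", "P4": "Liked"}

average = sum(scale[label] for label in ratings.values()) / len(ratings)
print(average)  # 3.0, consistent with the reported likeability average
```

Reporting the mapping alongside the average like this makes the summary statistic reproducible by the client.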

    Limitations

The four participants in our usability study fit our participant profile well. All had recent volunteering experience and were well versed in the process of finding and scheduling their opportunities. In addition, all of our participants were smartphone users who were active on social media. Because we created a participant profile of people who could easily manage a mobile application and already understood the volunteer process, we were unable to get the perspective of someone less experienced. A participant unfamiliar with the volunteer process would show us how well GiveGab teaches and guides users through finding opportunities, and even through understanding some of the terminology.

    Other limitations included time constraints and decisions made for convenience. We did not want to deter participants from taking part in our study, so we limited our sessions to 45 minutes. With more time, we could have performed more tasks and collected more data.

    Next Steps

Additional Areas to Explore

This usability study was limited to GiveGab's iOS application. GiveGab also offers a website with the same volunteer social network features. It would be interesting and informative to conduct a study of the GiveGab website (or consult external work on it) to determine whether it has similar usability issues and successes. It would also be insightful to compare how users behave on the website versus the mobile application, and whether the same tasks yield similar results.

    In addition, there are features on the GiveGab iOS application that we did not cover in our tasks due to time constraints and the focus of our research questions. Additional testing of these features (such as tasks involving the news feed and social interactions) would yield even more useful information. Conducting the study with a different participant profile could also yield additional data (see the "Limitations" section).

    Reporting

A high-level report was presented to our peers and instructors in a twelve-minute timeframe. The presentation was given in slideshow format and lightly touched on the following topics:

    GiveGab Overview
Usability Problem and Test Goals
Methods and Procedures
Study Participants
Data Collection
General Trends
Findings and Recommendations

    Because of the presentation's time limit, most of the information it covered was brief. A next step would be a more in-depth slideshow presentation for the client, one that is more comprehensive and covers additional details as described in this document. We would show more of the quantitative data we compiled and summarize more comments and recommendations for additional findings. This document would essentially act as the outline for a slideshow presentation to the GiveGab client.

    Study Improvements

We also detail suggestions for improving the usability study; if GiveGab ever wants additional usability testing done on its products, the people conducting that study can learn from our limitations. Running a second pilot test: a second pilot test would allow us to familiarize ourselves further with the tasks and the potential problems that could arise. It would also give us more practice and confidence when facilitating sessions, and help us pick up on things to look out for during the actual sessions.

    Eliminating open-ended tasks: one of our tasks was somewhat open-ended, and participants ended up discovering the solutions to subsequent tasks while performing it. This skews the data for the following tasks, since users were already somewhat familiar with what they needed to do. Eliminating open-ended, subjective tasks at the beginning of a session would give us better, more accurate data.