This week's assignment provided plenty of opportunities to reflect on the process of creating the overall app and to think about how I would proceed from this point. I felt like the instructions were a bit spartan, however, and I wondered if any students in the course had not yet taken Usability 1 and 2. If they hadn't, it was probably a daunting task. For me, I just had to go back to what I learned in the previous courses and try to apply those methodologies to this assignment. I actually quite enjoy the usability process, so I enjoyed getting back to that way of thinking.
I always struggle with creating portfolio items, partly due to my own reserved personality. I find it hard to express in a favorable way what I did in any given assignment or in my portfolio. It's not an issue of understanding the importance; I just find it difficult both to explain why I made the decisions that I did and to take credit for the work I've done. I never want to feel like I'm boasting, and I typically get nervous when someone else is evaluating my work. I guess most of us feel that way at some point.
Regardless, it is always a good exercise to look back at a project and see both the positive aspects and the areas in which you can improve. With portfolio pieces in particular, I think recognizing both of those areas is important, even if it isn't stated for everyone else to see.
For this week I tried to address some of the areas where I felt the design of my app was flawed. I added a menu for selecting which child was in use. I wanted it to be accessible throughout the entire application so that you could always switch between 'users' no matter where you were in the app. In the context of a prototype with limited functionality, this doesn't come across as smoothly as it would in a fully developed app, and representing it with the limited functionality available isn't as clear as it could be.
Still, I feel like I've made some good progress this week in re-evaluating how the app functions, and I'll have even more changes coming in the next little while.
I thought this week's assignment was interesting and meaningful. As I worked through all of my screens, I could see that there were sections where I had missed a few key elements: screens that would be required for the interactivity to work correctly, but that essentially just facilitate getting from one part of the process to the end result. There are also a few places that even now will need some more work and consideration.
I've actually designed an app like this before, but I was given the task as a graphic designer and was told which pages it would have. In that case, similar issues presented themselves. I think to some degree these types of things are part of the nature of the process.
It was an interesting week in many respects. I was in Seattle for UXPA 2016, which was very interesting, with a lot of really good information and wonderful people. As for the assignment for the week, it was interesting to see the different ways each of our group members approached it. Each approach had strengths and areas for improvement, so it was good to hear from the other members of the group where they thought I could improve. Some of the things they brought up I hadn't really considered, so that was really helpful.
I tried to implement some of their ideas and suggestions as best I could. I'm sure that I'm in a better spot than I was previously; hopefully it's far enough along for the purposes of the assignment.
For a graphic designer like myself, wireframes are always an interesting animal. My normal process is to sketch a concept in really low detail, refine it through a few drafts, and then move to the computer to flesh out the design in a more meaningful way. It's almost like an experiment that way: things that are working are improved, and things that are not are scrapped. So the process for designing everything is similar in many ways to the concept of creating a wireframe.
As I move forward with wireframes, there's always a temptation to skip ahead and start committing myself to a design before I should. In graphic design, as long as it looks good, there isn't really a downside to doing this. In interaction design, by contrast, jumping ahead before it's time can become a serious headache, with hours of rework and fixes that spiral on and on. It's a good reminder to take each step in the proper order and be patient, something I admit is not my strength.
I think the most striking part of this week was related to user journeys. I hadn't done a user journey per se, so it was a new experience, and I found it much more time consuming than I had imagined. I did, however, find it useful in making me really connect the functionality of the application to the needs of the user, and understand how each user would likely be using the app. Obviously in a new class like this there are going to be bumps in the road, so that was no surprise, but I'm a big believer in using the right tool for the job, so I'm glad we were allowed to use something that had worked before for the context.
This week's assignment was a continuation of last week's. We continued analyzing the eye-tracking videos and wrote a report on our findings. I also turned in my selling-usability email.
I feel like the eye-tracking critique was a helpful assignment, particularly when it came to thinking through why we use eye tracking as opposed to other methods for usability tests. I hadn't thought through that particular line of reasoning before. I also found myself asking how much we can really trust eye-tracking data.
I think that knowing where a user is looking is valuable in helping you understand what they are seeing, but there seems to be some tension between knowing what they see and knowing how they react to what they see. Since eye-tracking tasks differ from regular usability tasks, and you can't do both as effectively together, there is a definite give and take when choosing which types of tasks to assign the users.
The assignment this week covered gesture-based usability, and we also finished creating a mobile usability study. I found creating the study difficult, mostly because of the technical issues involved in testing on different devices. But I think these issues will decrease over time until eventually there are many cross-platform options for mobile UX testing.
The questions about gestures seem more difficult to me, particularly in regard to cultural differences in a worldwide community. How do we create gestures that make sense for people of such differing backgrounds? It's a big question without easy answers.
Adding mobile to the process of designing a usability study adds a lot of complexity. There were so many different types of solutions available, some of which were only available for one device. Of the options presented, I was most drawn to UXRecorder. However, since it only works on Apple devices, it was a non-starter. It's too bad they haven't expanded it to allow for Android testing, as I think it has a lot of potential.
As it is, I'm structuring my test around a Tobii mobile device stand. It just seemed to be the most comprehensive solution for what I need. If I were unable to use it, which in the real world is a very likely scenario, I would basically try to recreate the setup using other equipment, such as some GoPro cameras and whatever screen-sharing applications I could find. There's definitely no easy solution with the technology currently available.
OK, so what I learned this week is that it's easy to overlook something when creating a study like this. I read through the assignment several times as I was creating my survey, and I thought I had covered all my bases. That was until I started doing the other students' surveys and realized that I had forgotten demographic info! Big mistake there.
Otherwise I think it went pretty well. As the data keeps coming in, it's obvious that there are some serious flaws with the site I used for testing. Around 60% of participants have failed to complete the first, and certainly most important, task. Hopefully the data will show why they failed: whether they simply couldn't navigate to the correct page, or whether they had trouble adding things to their cart. Either way, if users can't successfully purchase items on an ecommerce site, that's a big problem.
Where do I start with this assignment? I guess the first thing is that the idea of using Mechanical Turk as a recruitment tool for finding participants in a usability study was like looking at an image that seemed abstract and then realizing you were looking at it upside down. I spent a good deal of time laboring through the MT pages trying to understand their lingo, methods, and purposes. To say the least, I was not impressed.
While Ethnio is inevitably not a perfect system, as no program ever is, it does seem to be purpose-built for usability studies. MT, on the other hand, seems more about getting people to perform menial work, and just happens to assume that usability testing falls into this category. While I can faintly make out how you could possibly use it for testing, I don't think it would achieve the best results.
Having said that, I do feel it was a valuable exercise to take a step back and consider whether the recruitment methods I had considered were effective.
Whew! That was more work than I initially gave it credit for. I thought I had an idea of what would be involved in reviewing the sessions and gathering all of the data from them. I was wrong; it was much more intensive than I expected. I probably spent almost as much time just reviewing the moderated sessions as I expected to spend on the entire project. I have a new respect for what a usability professional does.
One of the things that most stood out to me was how difficult it actually is to draw conclusions from what you see in a usability study. I guess I thought more things would be obviously wrong. Perhaps this impression originated with the readings we did, which simplify things a bit to make them understandable. Everything the authors talked about seemed pretty straightforward and logical, and yet, after viewing each of the videos, I found myself asking, "Now what?" I'm sure they did give warnings about this, and maybe I just glossed over the parts where they said it. But now, after finishing the project, I think the thing I am most sure of is that I have a good deal of room for improvement in this area. Which I suppose should be expected, since I'm just getting started. Still, it's a little daunting.
My experience moderating a test went nothing like I had imagined. We ended up having a major technical issue when he attempted to order, one that in fact appeared as soon as we hit the site. However, because I was wrapped up in the tasks and wasn't in a good position to read the screen, I didn't notice it at first.
After that, my test subject struggled to find where to begin his order, and I immediately wanted to jump in and show him where to go and how to perform the task. I refrained from doing so, but it became more challenging the longer it took. I began to feel a bit flustered and had to mentally calm myself as we continued.
Eventually he made it to the ordering page, got a "technical difficulties" page, and was unable to complete the portion of the test where ordering was required. He seemed frustrated, and with good reason. I can't help but think that a complete failure is among the worst technical results we could encounter during a first-time moderation. It totally caught me off guard. However, at least the team would now be aware of an issue with outages.
I think on the whole I did OK, about what I'd expect for a first-time moderator exposed to a situation like this one. There were some things I did pretty well, but others I should definitely work on. I need to be sure to ask good follow-up questions; I tended to ask whatever first came to mind, even if it wasn't a very good question. I also need to present a calm demeanor. I was nervous, and I think he probably picked up on that.
To say the least, it was an eye-opening experience.
The first thing that came up for me in this assignment was: why are we even doing this in the first place? It seems to me that every one of the possible quantitative measures we discussed would be wanted for a usability study. After some consideration, I came to the conclusion that the reason for needing to select a given measure was so that we could think more carefully about how each of the different metrics worked and could be used. This was something I think I glossed over a bit in the reading. As a result, this exercise ended up being really useful in helping me think about why each of these metrics, as well as many others, could be useful for understanding different issues with our site in the context of usability.
I have to admit that prior to beginning the usability classes this semester, my overall expectation was that this portion of UXD would be the most onerous. I'm more of a creative type by nature than a number cruncher. And although I think I always will lean in that direction, I've found that the usability part of UXD is fascinating, and I think I'll find it quite enjoyable.
Writing a screener and task list sounds like a pretty straightforward job. Upon starting it, though, I immediately realized it was much tougher than I gave it credit for. I struggled to come up with appropriate questions to ask the participants in the screening section of the assignment. At first I wasn't really sure what to ask, or the best order to put the questions in once I came up with them.
As for the tasks, I also found them challenging. Once I know what I want to say, I'm usually alright at finding an intelligible way of relaying that information, whether written or visual. But figuring out how to move a participant through the site, guiding them to perform tasks relevant to revealing usability issues, while not asking leading questions, left me feeling like I couldn't express what I wanted adequately.
Translating the expectations and objectives of the client into a list of screening questions and tasks made me think a lot more about how this whole process of UXD works, and about the areas I'm going to need to get better at before I get my first UX job.
For my assignment this week I really went back and forth on how I wanted to communicate the information. I considered a standard report and an infographic, and eventually decided on a bit of a hybrid approach. Hopefully this method allowed me to communicate the information more completely than either of the other approaches would have.
It was an interesting week, in part due to the content. I hadn't really considered that there was more than one type of usability test, or that the types might have different purposes. Then again, I am very new to this, so I have a lot to learn. It was also interesting for personal reasons. Our family made the drive down to Mesquite, Nevada, for a soccer tournament: three days of sun, swimming, and soccer games. The boys, whom I used to coach, did really well, finishing unbeaten with three wins and a tie. All in all, a pretty good week.
I hope you like it.