Monday, October 24, 2016
Charles Di Renzo
Week 2 Reading Reactions

I liked the Wickens reading the most out of these three. It covered the different types of experiments and the conditions under which to use each, which was fascinating to me, as I love the science involved with HCI. I spoke about it in my last post, but I'm always thinking about whether or not a product can be 'objectively good', and if so, whether that means most products are headed down a road of imitating one another in the hopes of being the best product on the market. I think we all feel that products can be 'objectively bad', because there are products that are either hard to use or that don't accomplish their purpose very well, but can a product be deemed 'objectively good' if there are people who would prefer a different interface? Either way, I enjoy hearing about the science and research that goes into everything. I was also glad to see the discussion of p and t values and their importance in research; seeing how studies were conducted was a really interesting part of my economics major, and I hope my previous experience with statistics will come in handy during this graduate program. I also enjoyed the Dix Ch. 9 article and its discussion of heuristic evaluation. We touched on these in my undergraduate 'Usability' class, and I enjoy using an app or a webpage and thinking about how they've been applied.

Charles, I like that you pointed out Wickens's experiment types and the objectively good/bad products. I didn't really consider discussing either of these in my response to this week's reading, but the way you responded reminds me a lot of other readings and lectures I've been to on how products are invented or revolutionized. Too often, it seems products are given "extra features" for the sake of doing something meaninglessly different: features that distract from the product, or that are simply done poorly. I own the Samsung S2 Smart Watch. I like it a lot.
It functions well as a phone and a watch, but beyond that it holds little value for me. It COULD do so much more (I'm trying to learn to build my own apps for it). I was also extremely disappointed that the SmartThings app is not yet available for the S2, despite being deceptively advertised as compatible and despite Samsung owning both products. Mechanical keyboards now come with distracting but very cool lights. They make a great, expensive techy purchase, but I also wonder at the necessity (though maybe I'm a hypocrite, as I type on a mechanical keyboard at work, even if I don't think it's really that much better than my built-in laptop one). Shoes can track your steps but do little more than that; the sensor is inconveniently located on the bottom of your dirty shoe, and it depends on a phone. Smart home devices are extremely expensive for what they are and how poorly they are built. Many of the devices are designed for battery installation so you do not have to wire them into a wall, but they rarely offer any power option other than the battery. Why don't they come with rechargeable batteries or solar panels?

It just seems that all too often, things are designed because they are cool new ideas, without trying to solve problems or asking: is there anything wrong with the current design? How could this be better? Or they do ask the questions but do not try to encourage REAL conversation or negative feedback. Growth in HCI often comes most from negative feedback, not feel-good answers.

I really like the idea of doing a pilot study before the real study. This makes a lot of sense as a test run, but ideally in software development I like doing multiple evaluations and tests. The process becomes very cyclical: build, test, revise, repeat. The problem with this is that the software is only "new" once. Finding lots of users who are unfamiliar with the system can be hard for me, since I work in an industry where everything is secured or confidential.
Something else I wondered about the multitasking study was whether the users had high familiarity with the types of tasks; perhaps the task itself was hard for them, in which case, does that really reflect on their multitasking abilities? I would probably consider myself an HMM, or heavy media multitasker, rather than a light media multitasker (LMM). I wonder whether the results or takeaways from the study could be used to identify self-deficiencies in this area for improvement, and what multiple follow-ups to the test would produce. I doubt that picking out red or blue rectangles would be hard, but you never know. Perhaps the task was distracting in itself because it was boring? I wonder if motivational factors should also be considered.

Since the HMMs and LMMs are also self-identifying, wouldn't it be best to have a survey or tasks that confirm their assumptions? For example: What makes you a multitasker? Do you frequently have multiple tabs open at the same time? Does it bother you to have music or a TV on while you are doing something else, like work or homework? Do you perform well in high-stress or high-anxiety situations? Do you like to work ahead? Maybe also define for them, better, what all of that means?

Lab tests are never going to be just like real life. They are unfamiliar spaces for the participants, and people are aware they are being watched in a way that is hard to forget. So another question I have is how different the results would be if the experiment were adapted into some kind of similar test, or even a video game. That way it could be run on a larger audience without the nuances of lab testing. Wickens (2008) does discuss that there are varying methods of testing that involve less control and more realistic observation. I would like to see how some of these tests could be conducted using mixed methods: how they are set up, how the analysis and synthesis are done, and to learn more about the backgrounds of the individual researchers.
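As a side note tying this back to the p and t values mentioned earlier: a group comparison like HMM vs. LMM performance typically comes down to a two-sample t-test. Here is a minimal sketch in Python; the numbers and the `welch_t` helper are entirely my own invention for illustration, not data or code from the study:

```python
import math
import statistics

# Hypothetical data, invented purely for illustration: completion times
# (seconds) on a filter task for light vs. heavy media multitaskers.
lmm_times = [5.1, 4.8, 6.0, 5.5, 4.9, 5.3]
hmm_times = [6.2, 6.8, 5.9, 7.1, 6.5, 6.0]

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    standard_error = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / standard_error

t_stat = welch_t(lmm_times, hmm_times)
print(f"t = {t_stat:.2f}")  # about -4.37 for this made-up data
# For roughly 10 degrees of freedom, the two-tailed critical value at
# alpha = .05 is about 2.23; since |t| exceeds it here, we would reject
# the null hypothesis that the two groups have the same mean time.
```

In a real analysis you would look up the exact p value from the t distribution (or let a stats package do it), but the shape of the computation is the same as above.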