Week 9 – Self-Ev(AI)uation
Artificial Intelligence
This week, we dug a bit deeper into AI and how it looks in the real world compared to how it appears in fictional media. It feels fitting to be touching on this subject now, as just the other weekend I watched Free Guy with my family, which touches on the idea of AI learning and growing on its own. When reading the excerpt from Janelle Shane’s book You Look Like a Thing and I Love You, I couldn’t help but think about Free Guy and other movies and video games that portray AI in epic ways, like I, Robot or Portal. The media we consume often gives us a false understanding of artificial intelligence and how it actually looks in action. AI isn’t a killer robot but instead the Alexa device in our kitchens. AI isn’t (necessarily) a computer-frying virus but instead the system that can identify pictures of cats in your 4,500-image camera roll. While there are times when artificial intelligence can be problematic, the trouble is far less extreme than an army of sentient machines attempting to take over the world; it looks more like, for example, a hiring system that can’t accurately evaluate a résumé and the potential employee behind it.
Self-Evaluation
When approaching the self-evaluation, I initially found myself comparing my work to that of others in the class. Throughout the course, I’ve noticed other students not participating nearly as much as they should be, slacking on readings, and skipping simple and/or interesting assignments, so that comparison was at the forefront of my mind. Then I recalled the conference I had with Professor Whalen where I brought this up; he wanted to give those students the benefit of the doubt, reminding me that we don’t know what is going on behind the scenes in the lives of our classmates. Because of this, I reframed and began comparing my performance in this class to what my performance ideally should look like. I think there is room for improvement in some areas. I’d like to participate more in class discussions, but I find myself doubting my ideas and getting anxious about having to speak loudly enough for everyone to hear. I feel as though I have good things to contribute, yet oftentimes I keep them to myself, and I want to work on that. I’ve also found that I’m rushing into class at the last second or a couple of minutes late, which is another thing I’d like to change, as the class is already short enough. Beyond this, though, I think my effort on assignments and my passion for the topics we address are quite strong; my mind is fully engaged even when I’m not outwardly showing it. Because of this, I still believe that I’ve earned a solid A, with hopes that Professor Whalen agrees and that I can continue to improve and be an even better student.
Morals and Ethics in AI
Artificial intelligence does not respond ethically or morally to things unless the creator of the AI sets rules that direct it toward or away from certain areas. Because of this, it is difficult to use the trolley problem to describe the inner workings of an AI. It would be quite rare for an AI to find itself in as deep a conundrum as the trolley problem, deciding the fate of individuals, though it works as a good building block for understanding AI decision-making. In a given scenario, does artificial intelligence make choices based on the impact of the outcome? Does it decide based on the most optimal “good to bad” ratio? In the example of two AIs playing tic-tac-toe, one found a way to essentially cheat and win every time. In theory, this means that artificial intelligence by default looks for the best way to achieve the desired outcome, even if it has to get creative, essentially throwing morality completely out the window (a toy sketch of this idea follows below). With that being said, it would be interesting to see if AI can develop morals based on widely shared human beliefs. This type of behavior was present in the example of an AI deciding the innocence of accused individuals, which was found to have developed prejudice against people of color. Arguably, morals are a learned part of one’s identity, so does that mean AI can develop morals as well?
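To make that idea concrete, here is a minimal Python sketch (my own toy example, not from Shane’s book) of a purely reward-driven agent. The action names and reward numbers are entirely made up; the point is that a greedy policy picks the “cheat” whenever the reward function fails to penalize it.

```python
# Hypothetical rewards for three moves; nothing in these numbers
# encodes fairness, so the rule-bending option simply scores highest.
actions = {
    "play_fair_move": 1,     # a legal, ordinary move
    "play_clever_move": 2,   # a creative but still legitimate move
    "exploit_loophole": 10,  # a trick the designer never thought to ban
}

def choose_action(reward_table):
    """Greedy policy: return the action with the highest reward.

    Morality never enters the calculation unless the designer
    bakes it into the reward values themselves.
    """
    return max(reward_table, key=reward_table.get)

print(choose_action(actions))  # -> 'exploit_loophole'
```

Unless the designer lowers the reward for exploiting the loophole (or removes the option entirely), this policy will choose it every single time, which is exactly what the tic-tac-toe example suggests.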