dgst395

Week 10 Summary

AI and Ethics

The following is a response to this given prompt:

Using your experience with the simulations — and taking for granted that we all hopefully agree that AI should behave ethically when possible — how do you think designers of AI should account for potential ethical decisions?

  • 1. Allow the AI thousands of simulations of every possible scenario until it learns to seek the best possible outcome for humans?
  • 2. Create a list of rules describing the best behavior for every conceivable scenario?

This is a difficult decision because neither option is perfect. The first feels risky and potentially too difficult to implement accurately, though if it worked as planned and hoped, it could be a phenomenal idea. The second is dangerous because the biases of whoever writes the rules can be reflected in the AI's behavior. In theory, designers of AI could gather input from many different individuals, or even use responses from surveys or polls, to build the most accurate picture of what is and isn't ethical in certain scenarios and write rules around that, but it would still be difficult to define the "best" code of ethics to follow. Because of the danger of bias shaping an AI, I would have to go with the first option.
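To make the contrast concrete, here is a toy sketch (not from the course; all names and numbers are made up) of what the two options might look like in miniature. Option 1 is a simple trial-and-error learner that runs thousands of simulated scenarios and gradually figures out which action produces the best outcome; Option 2 is just a hand-written lookup table of rules, which bakes in whatever its author believed.

```python
import random

def learn_by_simulation(true_rewards, episodes=5000, epsilon=0.1, seed=0):
    """Option 1 in miniature: run many simulated trials and learn which
    action tends to produce the best outcome (a toy epsilon-greedy learner).
    `true_rewards` is the hidden average outcome of each possible action."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each action
    counts = [0] * len(true_rewards)       # times each action was tried
    for _ in range(episodes):
        if rng.random() < epsilon:
            # occasionally explore a random action
            action = rng.randrange(len(true_rewards))
        else:
            # otherwise pick the action currently believed to be best
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        # the simulation returns a noisy outcome for that action
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return max(range(len(true_rewards)), key=lambda a: estimates[a])

# Option 2 in miniature: a fixed, human-authored rule table.
# Any bias of the author is frozen directly into the behavior.
RULES = {"scenario_a": "swerve", "scenario_b": "brake"}

# After enough simulated episodes, the learner converges on action 1,
# the one with the best hidden average outcome (0.9).
best = learn_by_simulation([0.2, 0.9, 0.5])
print(best)
```

The sketch also shows why Option 1 is hard to do accurately: the learner only seeks the "best" outcome as defined by the reward signal, so designing that signal carries the same risk of hidden bias as writing the rules directly.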

AI and Race

"Sinister shit" refers to the little things buried within code, AI, and programs that can be harmful or dangerous. An example is facial recognition software failing to detect the faces of African American people, especially women. This most likely results from the sample faces used to develop the software lacking diversity, being predominantly white, which makes it difficult for the software to recognize that a person with black skin is even a person. AI can also be used in law enforcement, speeding up processes and reducing the workload for actual humans, but it can make questionable and seemingly biased judgments. For example, African Americans may be more likely to be flagged for imprisonment by an AI because it learned from previously biased decisions made by human operators.

AI and Creativity

AI surrounds us, even though oftentimes we don't realize it's there. Today, I got in my car and plugged in my phone to charge. I unlocked my phone using Face ID and scrolled through the pages of apps until I found and opened Spotify. I then navigated to my library, opened the playlist I wanted to listen to, and pressed the shuffle play button to shuffle the tracks. After that, I swiped out of the app and opened Google Maps, typed in my destination, pressed the button for directions, and started my route. In the minute in which all of that took place, I used various AI systems without even realizing it.

In the "Bot or Not" quiz in class, I finished in 5th place and stayed in the top five for almost the entire game. A lot of it felt like gambling: taking a 50/50 shot in the dark and hoping you were right, especially with the text questions. The wrong answer that surprised me most, though, was the image Professor Whalen generated from the Discord bot using just the word "follower." It looked planned and prepared by an artist, not generated in a few minutes by a bot. I was extremely impressed, and I simultaneously realized that bots are much more capable than I had previously thought.

Creativity is the ability to brainstorm ideas and use various materials to bring an image from one's imagination to life. Or is it? It seems as though AI can be creative, but since creativity is a man-made concept, could it even apply to a computer? Would we ever consider labelling other animals as creative, since they're also living beings, or is there something strictly human about being creative? I think it would be more appropriate to say a computer is "operating creatively" rather than calling it creative.