So far in class, we've only talked about how A.I. directly relates to humans: how it mimics us, how it communicates with us, how it can be more like us. While it's fun to play with chatbots and stimulating to think about whether a machine can truly "think" or "understand", those questions are just a small part of the picture. Where A.I. actually impacts our lives right now is economically, and we should be concerned.
*[Image: It's interesting to think about whether machines can think, but not exactly important]*
The most obvious way that machines are impacting us economically is through automation. As mechanical technology and A.I. improve, almost every job could be automated, and a huge percentage of people would be unemployed. Bear with me for a bit, but that reminds me of Star Trek. In Star Trek: The Next Generation, Captain Picard reveals that his family owns a vineyard and produces wine (because real wine is apparently better than the readily available "synthehol" produced by replicators), even though Federation society is almost entirely post-scarcity. What we discover is that since labor is not necessary, people do jobs as hobbies. While I don't imagine this is what will actually happen when/if most real-world jobs get automated, it's certainly a reassuring possibility.
*[Image: Captain Picard, pictured with tea. Earl Grey. Hot.]*
Other possibilities for what happens when a large percentage of people aren't needed in the workforce are less comforting. Will the remaining people be forced to fend for themselves on the streets or in some sort of pre-automation enclave? Will people with real jobs exploit people whose original jobs got automated, and use them to do degrading things for little to no money? Will we as a society decide that we'd rather have people have jobs, and start to de-automate our world? Will the remaining jobs be carved up into part-time positions so everyone is "employed" but not enough to actually pay the bills? Will the people who lost their jobs to automation revolt against those who didn't? I'd rather live in the Star Trek future.
*[Image: Expect to see more elevator attendants if we decide to de-automate]*
Robby only briefly mentions job automation in his article, as his main concern is that A.I. is developed exclusively as a means to make money, without any regard for the impact it has on our society. Think about it...how many A.I. projects can you think of that were primarily motivated by improving the world? Maybe an elder care robot, maybe an arty indie game, but even those might be money-first projects that just appear to be trying to help people. Even if the producers of the project aren't thinking about making money, the people who finance them will be, meaning big projects will never be completely free of a profit motive.
*[Image: Hellblade helped shed light on mental illness, but it also had a nearly $10 million budget]*
According to Robby, the fact that A.I. algorithms are produced for monetary reasons leads some of them to be racist. He assumes that in a world where A.I. was produced with one eye on how the project would impact people, such racist A.I.s wouldn't exist. I'm not entirely convinced, since these are programs produced by people (for now). Those people could simply make mistakes, which happens to the best of us, or they could be racists who intentionally make programs do racist things. I don't think it's fair to say that the pursuit of money inherently means ignoring the needs of people. Ideally, consumers drive producers to make better products (that's how capitalism is supposed to work, anyway), including products that aren't racist.
*[Image: Nikon cameras had a bug where they thought East-Asian people were blinking]*
While it may seem like job-stealing robots and racist camera algorithms aren't terribly related (besides the whole technology thing), they are. They both reflect a trend in technology: we, collectively, are being short-sighted. If we don't change, and we only look to the immediate future of what we can get technology to do, we won't know what to do with it once we achieve it. Let's say Nikon figures out automated cameras that take perfect pictures every time. Now what? Well, photographers are out of work, but that's not a huge deal because they can do something else. Until we figure out how to automate that too. Then what? At some point, we'll need to decide what the end goal is, because otherwise we'll end up in a pretty terrible place.
Your article brought up some very big concerns about the development of AI and automation within society. I agree with you completely that people need to be concerned about automation's economic role. Personally, I feel that the advancement of AI will systematically drive people out of jobs. Unless developers reconsider the 'big picture' of AI, I fear that our future will be purely automated.
But what does that mean for us in a society where human value is assessed based on income and job performance?
Very interesting points. We have already seen many blue-collar jobs disappear, such as factory work, as factories become more and more automated. As AI keeps improving, there are concerns that we could lose white-collar jobs in the future as well. However, people will still need to build, design, and program these robots, as well as maintain and oversee them. This will push more people to get a higher level of education to be able to do these jobs. So while we might see more blue-collar jobs disappear, other jobs will open up for more highly educated workers.
What about once we make algorithms to program programs and robots to maintain the other robots? It's nice to think that we as computer scientists are safe from job automation, but really we (or our colleagues) are actually working towards getting rid of our own jobs.
I have the same concern about automation driving people out of jobs. Yes, there are new jobs being created, but many people don't have the skills for those jobs. I don't think de-automation will be the key to stopping this. If we want to improve our society, it will be important to make automation benefit everyone instead of only a select few.
Any ideas on how we might be able to do that?
We must ask the question: what is our end goal here? What is our end goal in this world? It seems that our main concern is what is best for us. What if I lose my job? What if humans are overpowered by a superior being? I am scared of something that can create and achieve much more than I can, without a hard limit. Is it best to leave things undiscovered so that other humans can find them, or should we design robots to make these discoveries and best humans in their fields of creativity?
Assume robots discover humans are a drain on this world and decide to exterminate us, as most sci-fi novels imagine. Is our world better? Obviously, the human race is in worse shape, but who is harmed and what is gained?
If we disable robots so that we can feel superior, does that set a precedent? If we discover that elephants are capable of thinking beyond any human, will we need to exterminate a species to reduce the risk of having our world taken over by something not human?
We as humans have already disabled other would-be superior creatures. We intermixed with and/or killed off the Neanderthals, we exterminated all of our natural predators, and we domesticated most of the animals that have similar traits to us (and could therefore potentially evolve into something like us someday) like dogs and horses. As such, I don't think there would be any moral concerns with hampering AI so as to keep our status as the #1 species. However, AIs deciding we are inferior and dominating us is not really the concern here. The issue is, if we make machines to serve us, and they do it so well that we don't need to work anymore, what do we do with ourselves?
I think this post brings a note of caution to the future design of AI. We as a species collectively need to be aware of the fact that AI is a reflection of our image. We have shortcomings, and so does AI. As a result, it is a question of how to limit this, rather than how to avoid it, that needs to be addressed.
I think that if the creators of AI-related...things...had the primary goal of helping people (whatever their definition of that is--hopefully something like "helping kids learn to read" or "saving pets from burning buildings"), it wouldn't be so bad. But CEOs and other higher-ups love making and saving money, so I don't blame developers for appealing to a certain market. If they weren't going to build the thing, then someone else probably would have--and they would have made tons of money for it.
ReplyDeleteThe problem is, how can we get the higher-ups to want to help people?
I think you're right that most people would prefer some jobs (particularly those involving talking to someone) to be done by humans. That's why we still have human cashiers and circulation attendants in our stores and libraries even though we have the technology for self-checkout. Most people get frustrated when they call a phone line and get through to an automated response. But on the other side of this, it's worth mentioning that automated processes can be godsends for people with social anxiety or people who for one reason or another have trouble interacting with others. I definitely think there is a demand for human labor, but I also think we need to recognize that a demand does exist for automated labor. And as time goes on, automation gets more normalized. We don't bat an eye at the automatic checkout aisle in the grocery store anymore because we're used to it.
I like to be optimistic and think about the Star Trek scenario. I think it's unnecessary for as much labor to be done as we currently try to do, and as automation increases, the need for human labor may become very small or even zero. I personally hope that if this happens we can build something good on top of it. I'm a big fan of fully automated luxury communism. I love the Star Trek idea of the "jobs" of the past becoming the "hobbies" of the future. But I'm also very concerned about what will come of automation if we don't handle it well - particularly if the means of production (the machines doing the work) are owned by private companies or even by a corrupt government which doesn't equitably distribute the products.
Isn't a key idea of regular communism the concept of upheaval and the forceful seizing of the means of production? If fully automated luxury communism were to be realized, it would need a fully automated luxury communist revolution that would (automatically and luxuriously) dissolve the corrupt government and/or private companies.