February 07, 2022

Who is the “I” in AI?

It will be very interesting to see where the future of AI takes us. The next ten years are very important to how we grow. We will see a lot of things happening, but we have to be careful. That’s a warning I’m putting out at this point: as organizations and as a society, we have to be very careful.

Subodha Kumar, professor of marketing and supply chain management

Advancements in artificial intelligence and machine learning, or AI/ML, have shaken up the way business is done. Now we’re hearing about Amazon fulfillment centers with more robots than human workers and wondering whether we’ll still need to work in 30 or 40 years.

In this episode of Catalyst, Subodha Kumar will guide us through the ins and outs—and impacts—of what AI/ML means for the future. Which industries will be the most affected by AI? How will our jobs be affected? Where are the opportunities in AI that we haven’t thought about yet? How can this backfire? 

Kumar explains that in the machine/human relationship, we can’t program innovation. This means that the “I” in AI is always backed by real people. And that “I” is not just coders, data scientists or algorithm writers. It’s the humans with knowledge and experience behind them: the truck drivers, the grocery store clerks and the HR representatives. This expertise is needed if we’re going to use AI to enhance productivity and unlock new ways of learning and working. 

Catalyst is a podcast from Temple University’s Fox School of Business about the pivotal moments that shape business and the global economy. We interview experts and dig deep into today’s most pressing issues. In this season, we’ll interview experts on everything from how hip hop influences consumer behavior to what’s next in artificial intelligence. Episodes are timely, provocative and designed to help you solve today’s biggest challenges. Subscribe today.

Full Transcript

Host: Welcome to Catalyst, the podcast of Temple University’s Fox School of Business. I’m your host, Tiffany Sumner. Artificial intelligence has become a part of our daily lives at home and at work. From driving a car to hiring the right candidate, AI is everywhere. But who is the “I” in AI?

Our guest today, marketing and supply chain professor Subodha Kumar, explains how the professional workforce, from HR reps and coders to grocery store workers and truck drivers, will shape the future of artificial intelligence through experience and innovation. While some jobs may become obsolete, new opportunities will emerge. 

Subodha explains some of the ways that AI may change the career landscape and he assures us that our unique human traits will continue to make us marketable—even if in the future we are working more closely with machines. 

Hi Subodha, thank you for joining me today.

Subodha: Hi Tiffany, thank you for having me.

Host: So kicking us off: Why are we as humans so intrigued by artificial intelligence?

Subodha: See, the most fascinating thing about AI is that we think machines can behave like humans and that has fascinated us for quite some time. You know, for several decades we have been looking at AI like that, asking if they can behave like humans. 

The answer is yes and no. The intriguing or fascinating part about AI is that it can do a lot of the things we do, sometimes more efficiently than we can. So that’s where people say that, yes, machines can behave like humans, sometimes in more efficient ways.

But at the same time, you know, we use the words ‘artificial intelligence,’ which means we don’t think it’s real intelligence. There is something artificial about it that makes it different from humans. In our fictional world we have portrayed artificial intelligence as something very much like a person, but in reality, we are not even close. We have some good applications, and I’ll go to a very famous question: do machines think like humans? Well, you have to remember that no two humans think alike, right? So even that statement is not entirely correct. Can machines think? Yes, we have enough tools that machines can start thinking, but they will think differently, not the way humans think.

Is it good or bad? For some applications, it’s very good that machines don’t get perturbed too much by emotions, but for other things it will not be as good, where maybe we need emotions or something else. So I think the answer is that we are intrigued for two reasons: first, the whole idea that something else, a machine, can do what we do is very interesting to us; and second, it can help us achieve things that we are not able to achieve otherwise.

Host: So, which industries will be the most affected by AI?

Subodha: So in order to understand which industries will get impacted, first we have to understand how this whole AI thing evolved. We started with simple calculations, things like calculators. You can think of the early applications, right? Machines could do fast calculations that we could not.

Then we went into things like manufacturing. We could have robotic arms that don’t do anything more than what humans could do, but they could do it in a much more systematic way. Then we moved on to AI playing chess and other games.

Moving further, what we see now is a lot of prediction algorithms. Even more than that, Amazon is using a bunch of robots in their logistics centers, and FedEx is using them in their sorting centers. So we see a lot in manufacturing and logistics. If you move further, we have driverless cars.

So if you really want to break down where the real action has happened, or is going to happen, the biggest thing is transportation. I think we’re going to see a lot happening in transportation, and not just autonomous vehicles. We will have AI-powered robots: one of the largest fulfillment centers opened recently in Delaware, and it has more robots than humans. So we can see the scale of that.

Another one is healthcare. The reason is very simple: first, nobody can absorb so much data at the same time, so AI can help by providing very good recommendations. Second, we have experts and physicians who can’t reach everywhere, so we are using AI to see how we can help them. And here I’m not at all saying that machines will replace physicians; not in our lifetime. AI and machines are not going to replace that role, and they should not. But they will help in drug discovery and speed up the process. They can help with virtual nursing, those kinds of things. I think they’re very, very helpful.

Host: Yeah, because that’s one of the fears, right? It’s when the robots start to replace us that we get nervous, or the more sci-fi version, when the robots ‘start to feel,’ which you’re saying is, at least right now, not technically possible. But do you think that fear that AI could replace human workers is valid?

Subodha: Well, it is valid. Yes, they are replacing human workers, but I don’t think the question should be whether AI is replacing human workers; the question should be: is that bad for us? When driverless cars come and Uber replaces its fleet with driverless cars in the near future, we will need fewer drivers, no doubt about that. We will have driverless trucks, so we won’t need as many truck drivers. So will they replace human workers? Of course they will.

But this is not the first time machines have replaced humans. When we moved from an agricultural economy to an industrial one, the same thing happened, right? A lot of tasks that were done by human workers were taken over by machines. Is it bad? Now that’s an interesting question. For many industries it is not bad, because customers are asking for better service and better products, and you can deliver that only if you make things more efficient. So what AI and machines will do is help us create better products and better services, and in the process, some tasks will be taken over by machines.

But we need to teach these workers to do other things. For example, when the driverless trucks or driverless cars come, who will help design those algorithms and train those machines? The drivers are the best people to do that, right? So we need to retool them so they can help with the new tasks. We have chatbots for customer service, and those chatbots need to be taught how to respond to different kinds of questions; that’s where we will need people who are experts in that. So for many tasks, AI replacing human workers is actually a good thing, because then human workers can do better things and hand the low-level tasks to machines.

However, for some tasks it will be a problem. Where we don’t have a replacement, and where we are really in trouble, is where you either cannot retool people or don’t need them for the more advanced work. What do we do with that part of the workforce? That’s where policymakers have to think carefully, and that’s where people get a little worried. So I will say the answer is yes, it will replace workers; in many cases it will not be a problem, but in some cases it will.

Host: And it also sounds like you’re saying that AI can create jobs?

Subodha: Yes, it will create many jobs. It will provide better products and services, and in the process it will create many new jobs. So a lot of people who could not find a job may find one now, and they can find better jobs.

Host: And some of the people will probably need training and up-skilling on how to do the new jobs that might be created.

Subodha: Many state agencies right now are creating free training programs for certain groups of people so they can be trained in other skills, right? I gave the example of drivers.

So these things are already happening, and I’ll tell you that here the government has to play a very important role. We cannot just trust that Amazon will do it for us, you know. When Amazon started growing in Seattle, they did not even spend money on building highways, forget about training people. They’re not going to do that, so we can’t trust that all these large organizations will.

So the different agencies have to come together and have a proper plan—which is happening, but I don’t think it’s happening at the scale it should be.

Host: So it’s really interesting to hear you talk about the relationship between people and AI in a way that we don’t normally hear about. Can you talk to us a little bit about where innovation may take AI next?

Subodha: You know, we are talking about industries like transportation, manufacturing and healthcare, but I’ll tell you some of the places where we haven’t seen as much AI yet but we are going to. For example, in the education sector, I think AI is going to be the big thing. A lot of virtual tutors and assistants will come, and that will help. Again, I’m not saying that AI is going to replace all the teachers and professors; that is not going to happen in the near future. But there are different parts of our community that need a lot of virtual assistance, that is already starting to happen, and sometimes these tools can be trained for very specific things.

In customer service, we are going to see more and more companies moving toward bots. Now the problem with all these bots is that they are not human-like, right? So we say that bots don’t give the same experience as when we talk to a human. There’s a big debate going on right now: should bots have a personality or not?

Now there are two schools of thought on that. One school of thought is that a bot should be like a human and respond exactly the way people respond, so that people get the same experience. On the other hand, maybe it’s good not to have a personality: we know we are talking to a machine, and many people believe, ‘I’m not interested in being friends with it, I’m interested in getting the job done right, so why do I need to worry?’ Sometimes it can be confusing if bots have too much personality. So these things need to be resolved, but we will see a lot of progress happening there.

Some industries that have been a little untouched so far, like construction, are where AI is going to play a huge role. Everybody wants to see how their house is going to be built, how it is going to look, where they will put what kind of furniture. AI combined with virtual reality and mixed reality is going to play a huge role there.

In agriculture we haven’t used AI as much, but things like blockchain are coming into place, and blockchain essentially requires that everybody be connected. Right now, if we don’t go into the core of agriculture and bring AI and machine learning there, we cannot make that a reality. And we will see many combinations of learning from one industry to another, so what we are learning in healthcare may carry over into agriculture.

Those kinds of cross-industry learnings in AI have not happened yet, but they are going to, and we will see a lot of that as we go further. So as long as we are careful about how we are using AI, and about ethical and responsible AI, I think we will be fine and we will see a lot of good results.

Host: That is a great setup for my next question, which is: how can all of this backfire? Since you mentioned ethics, I would definitely be interested in hearing about specific ways this could go wrong.

Subodha: Sure, yeah, it can backfire in many ways. I will start with a very simple example. Right now in HR, a lot of companies are moving into AI, starting with whom to hire and who will perform well. They try to get all the data and see what kinds of people performed well in what kinds of positions. They use that data, with keywords and advanced neural-network-based algorithms, to try to decide whom to hire.

But what about the bias already in our hiring system? All the machines are trained on that history, and the bias can take many forms: it could relate to gender, race, ethnicity, all those things. How do we make sure that when we train our machines, they are not trained on biased data?

In my mind, that is the biggest challenge we have with AI right now. So we must be extremely careful about how we are building our AI systems. Twenty years from now it will be very, very hard to look back and understand how these things were built and how they were trained, right? So if we don’t take action now, we may be in big, big trouble.
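
A minimal sketch of the kind of audit this bias concern points to, assuming toy hiring records, made-up group labels and the common “four-fifths” rule of thumb; it is illustrative only, not a method described in the episode:

```python
# Illustrative only: check whether historical hiring data favors one group,
# which a model trained on that history would likely reproduce.
from collections import defaultdict

# Toy historical hiring records: (group, was_hired). Groups are hypothetical.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count applicants and hires per group.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in records:
    totals[group] += 1
    if hired:
        hires[group] += 1

# Selection rate per group.
rates = {g: hires[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Four-fifths-rule style check: flag the data if any group's rate falls below
# 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Warning: {group} rate {rate:.0%} is below 80% of the top rate ({best:.0%}).")
```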

The second thing in AI is about innovation. Think about how Facebook started, how Twitter started, how Google came into the picture, how Netflix came into the picture. These were all responses to some kind of situation: we had a supply and demand mismatch, and these companies came into the picture. Things like, I’m not able to find these obscure books, or I’m not able to watch certain kinds of movies; that’s when Netflix came into the picture, right? Or Uber came along because I couldn’t get a ride without booking a very expensive car. So a lot of these things came out of innovation.

In some sense, AI can curb innovation, because AI will be doing the things it is trained to do. It will learn from that, but how it will create new innovation is not clear. I don’t think AI can lead to a lot of innovation on its own. We are trying to say that we will create products that will lead to innovation, but that is very fuzzy right now, and I think that’s a very big concern.

So what may happen is that we do things very efficiently and very quickly, but we reach a point where we are not growing as a society. We have to worry about that, about how we keep that flow of innovation happening.

In addition to this, in healthcare we’re moving toward AI, which is a great thing, but in healthcare a lot depends on new innovation. Can we make sure that over-reliance on AI does not curb that? One more thing I will add here: a lot of AI is about humans helping machines, right? But machine-human collaboration is extremely important. In AI, the machine has to help human workers and human workers have to help machines. And ethical AI is a big thing. I think every company, every organization and every society right now should make a serious effort to ensure that whatever AI they are building is responsible and ethical.

Host: You just raised so many great points. For starters, the relationship between humans and machines. I think in some ways there’s an unrealistic expectation that AI is just going to keep learning on its own, right? That machine learning just sort of self-perpetuates. But that’s not true without the human element, is basically what you’re saying. And I think the other thing you’re saying is that you can’t really program creativity or innovation; AI is not going to innovate for us. So how can we make sure that we continue progressing as a society?

Subodha: Well, AI is not doing innovation for us yet; that’s all I’m saying. Maybe in the future we will have a different view, but with what we have right now, it doesn’t look like it’s happening. It will be very interesting to see where the future of AI takes us. The next ten years are very important to how we grow. We will see a lot of things happening, but we have to be careful. That’s a warning I’m putting out at this point: as organizations and as a society, we have to be very careful.

Host: Is there anything else about AI and the future that you’d like to share?

Subodha: Yeah, the only thing I will say, if we try to summarize what is happening and what we can learn as an organization, as a consumer or as a society, is that there is a lot of good potential in AI, especially in things like healthcare, education and transportation. We have to take those positives. If I’m an organization, I have to be very watchful of what is coming out, what can make me more competitive in my industry and what kinds of solutions will work for me. If I’m a consumer, I can save money by doing things more efficiently, so clearly AI is going to help me.

For example, if I want to build a house, I can use AI to see how my house will look, how my furniture will fit, and so on. So this is good for consumers. As a society as well, I think there are a lot of positives, in the sense that whenever we get new solutions, they will help us.

But at the same time, I see a lot of potential problems, more so than what we saw with the Internet or social media, because there are new kinds of challenges that we have not seen before. Some of it is fear of the unknown more than anything else, but we don’t yet know what kinds of new challenges AI can bring, and unless we anticipate them we can run into huge problems.

So what can we do? The solution is that whatever we are building, we must sit down and think about what could go wrong. We have to have a plan, rather than just thinking about whether I will save X amount of dollars or X amount of time. We have to ask ourselves what could go wrong, not only what we will gain, keep a list of that, and know what our plan is as a society and as policymakers.

I will say that we have to be ahead of the curve. If we aren’t, we can run into issues, even though AI is very promising. I feel there are many small nuggets that can cause trouble, so we have to be very careful with that.

Host: What advice do you have for students or anyone really in the workforce as AI is becoming more prominent? What skills do you think they need or what skills should they work on to be successful in a workforce where there’s more AI?

Subodha: First of all, AI is going to be there whether you like it or not. The days are gone when you could say, ‘I don’t like AI.’ So what kinds of tools do you need to know? It involves a variety of things: how to deal with data, how to design algorithms, how to make business decisions out of it, and then how to manage all of this.

All four of these pillars are important if you really want to train yourself. I can tell you that the easiest to learn is how to deal with data, but at the same time that is the area getting the most crowded in terms of supply, because, as I said, it’s the easier part to get into. More and more people are getting into how to deal with data, but still, I think we need more. So that’s the first skill you need. We have simple tools like Python; Python is going to become like the English language, something everybody needs to know at the end of the day. So if you are a student going into the workforce, make sure you learn those skills.
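
As a minimal, illustrative sketch of that first pillar, “how to deal with data,” here is a small Python example using only the standard library; the region names and sales figures are made up for the illustration:

```python
# Made-up dataset: monthly units sold per region (illustrative only).
from statistics import mean, median

sales = {
    "northeast": [120, 135, 128, 140],
    "southeast": [90, 95, 102, 88],
    "midwest": [150, 160, 155, 170],
}

# The routine work of dealing with data: summarizing it before any
# algorithm design or business decision happens.
for region, values in sales.items():
    print(f"{region}: mean={mean(values):.1f}, median={median(values)}, "
          f"range={max(values) - min(values)}")
```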

Second, how to design these algorithms. Not everybody needs to design algorithms, but everybody needs to understand how they work, no matter what your role is.

The third piece, where I think we have a shortage of people right now, is how to convert all of this into business decisions. I don’t think we have enough people there. In fact, business analytics programs are now focusing a lot more on that aspect: beyond the data science piece, once we understand all of those things, how do we create business value from them in a socially responsible way? I think that is where students need to build more skills and should focus a lot.

The final part is how to manage these people. That is more for managers, but everybody will need to manage people who are trained in AI, or maybe manage AI itself. So that training or skill is very important. I don’t think the courses we have right now are enough, so that training needs to be built. I think that is the next step for our curricula, and something students should think about as they get trained.

Host: I would like to thank our guest, Subodha Kumar, for joining me today to talk about the “I” in AI. It is clear from our conversation that we are just scratching the surface of how AI will play a role in our professional and personal lives. Students will eventually be as comfortable in Python as they are in English, and the rest of the workforce should find ways to add AI to their resumes, too.

But most importantly, business leaders must learn how to convert data into business decisions. We are in the midst of an exciting time, but as Subodha reminds us, the future use of AI relies on us. We are the “I” in AI so it’s up to us to ensure that AI is being used in an ethical and responsible way. 

Join us for our next episode of Catalyst featuring Kevin Mahoney, CEO of the University of Pennsylvania Health System, for a discussion about the future of healthcare.

Catalyst is a podcast from Temple University’s Fox School of Business. Subscribe wherever you listen to podcasts and visit us on the web at fox.temple.edu/catalyst. We are produced by Milk Street Marketing, Megan Alt, Anna Batt and Karen Naylor. I’m Tiffany Sumner, and this is Catalyst. I hope you’ll join us next time.