“Do You Know What It Means…” To Train a Machine Learning Model?
July 14, 2025
Joining LSU as an assistant professor of computer science and engineering this fall, Keith G. Mills shares his thoughts on LSU and the future of AI. In his view, AI is intrinsically good when seen as a tool, much like a microscope or any other instrument that magnifies human ingenuity and creativity.

Keith G. Mills
First, why LSU?
A major career decision like this is a constrained optimization problem. You have to answer two questions at the same time. Will I be able to do what I want to do in research and teaching at the institution? And is the institution in a nice place to live?
I’m very excited about LSU’s moves toward AI. I believe LSU is set up to create some very good, practical AI solutions to real-world problems. I’ve heard about efforts like MikeGPT (an AI assistant for the LSU community to help navigate the university) by Assistant Professor James Ghawaly, as well as Associate Professor Nash Mahmoud’s Professor Index (an AI-powered review system of university professors). There’s also great infrastructure, such as access to high-powered computers. Another recent shift is that big tech is taking more of an interest in Louisiana; back in December, for example, Meta announced it is investing in a large data center in the state.
As for Baton Rouge and Louisiana, I can summarize my decision in one word: beignets. Or in two words, Cajun cuisine. But in all seriousness, I first visited Louisiana in December 2023, and it left a good impression. Moreover, I like having access to the amenities of a city. I’ve lived in Edmonton, Alberta, Canada, for more than a decade. It’s a city of more than one million people where I can get pretty much anything I want, but it’s not a giant metropolis like Toronto or New York. But I was born and grew up far north of Edmonton, in a town called Fort McMurray, and there, you have to appreciate the sarcastic advice people give you, such as ‘Drive 4 1/2 hours down to Edmonton and get what you need there.’ So, while there are a lot of R1 universities in very peculiar places, like in remote, small towns, where the towns primarily exist for the university, that’s not the environment I wanted to start the next chapter of my life in.
How do you see your research strengths adding to what LSU is already doing in AI?
I bring a lot of hands-on experience from both academia and industry with artificial intelligence and machine learning, so I understand what goes on under the hood. A key challenge of AI/ML research is that it’s one thing to make your code work in a software engineering way, without errors or warnings, but that usually isn’t enough. I know what it means to properly train and evaluate a machine learning model and the challenges of deploying it.
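For technical readers, the gap between "the code runs" and "the model is properly evaluated" can be made concrete with a minimal, illustrative sketch (the toy dataset and 1-nearest-neighbor model below are assumptions for illustration, not anything from the interview): a model can run without errors and score perfectly on its own training data, while only a held-out split reveals how it actually generalizes.

```python
import random

# Toy illustration: a 1-nearest-neighbor model memorizes its training set,
# so its training accuracy is perfect regardless of how well it generalizes.
# Only evaluating on held-out data gives an honest number.

def nearest_neighbor_predict(train, x):
    # Return the label of the training point closest to x.
    return min(train, key=lambda p: abs(p[0] - x))[1]

random.seed(0)
# Noisy 1-D dataset: label is 1 when the feature exceeds 0.5,
# but 10% of labels are flipped to simulate label noise.
data = []
for _ in range(200):
    x = random.random()
    y = int(x > 0.5)
    if random.random() < 0.1:
        y = 1 - y
    data.append((x, y))

train, test = data[:150], data[150:]

train_acc = sum(nearest_neighbor_predict(train, x) == y
                for x, y in train) / len(train)
test_acc = sum(nearest_neighbor_predict(train, x) == y
               for x, y in test) / len(test)

print(f"train accuracy: {train_acc:.2f}")  # memorization: 1.00
print(f"test accuracy:  {test_acc:.2f}")   # lower -- the honest estimate
```

The training score is pure memorization (each point is its own nearest neighbor), which is exactly why code that "works" in a software engineering sense can still be evaluated incorrectly.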
At LSU, there are a lot of faculty focused on cybersecurity, human-computer interaction, and software engineering. We use AI in all those fields, but it’s important to remember AI and machine learning are best viewed as tools to solve problems. Performance and accuracy become relevant in solving problems, but they’re not end goals in themselves. I think the focus should be on feasibility, utility, and reliability.
How do you look at LSU's investment in AI so far?
I’m glad LSU isn’t fixated on a certain subfield of artificial intelligence. Some people who are highly specialized tend to say their field will be the one to solve the next big problem and create artificial general intelligence, AGI. If your field is evolutionary computing or reinforcement learning, for example, you might put the cart before the horse and say, ‘This field is going to give us the answer,’ and create something like Data from Star Trek or M3GAN. I’d rather look at what problems we need to solve and how best to solve them. It’s an application-centric outlook I believe the phrase ‘Silicon Bayou’ is meant to embody.
Do you spend a lot of time thinking about the possibility of AGI?
I don’t think of it as being an event. I agree with Yann LeCun (vice president and chief AI scientist at Meta who’s pushed back against the idea that robots will develop the kind of negative traits that drive people to hurt others). I think, gradually, we’re going to develop smarter and better artificial intelligence models and incorporate elements from different fields. And I don’t think we’ll ever be happy with our progress. We’re always inventing new terms, such as AI vs. AGI vs. ASI, or artificial superintelligence, and new definitions to try to make things better and push things further.
What do you see as the biggest current misconceptions regarding AI?
In terms of the general public, it’s the idea that we’re one day going to flip a switch and suddenly there’s Skynet or what goes on in The Matrix. I could see that potentially happening, but it isn’t likely. Our understanding of how to use AI and where it’s appropriate is going to continuously evolve. We’re going to deploy it and slowly give it control over performing tasks or making decisions we find menial, and it’s going to make mistakes. Then, we’re going to refine it, so it can’t make those mistakes or, if it does, the consequences are lessened.
In terms of academia, I think the core challenge is translating the deontological law of society (an ethical theory that says actions are right or wrong based on rules rather than consequences) into the objectives AI should use to interact with humans. The way machine learning models are trained is not like logical law. We don’t tell the model: do not steal, do not jaywalk, do not commit tax fraud or whatnot. You reward it or penalize it based on the outcomes of its actions. It first has to commit an action, and then you need to judge that action and provide feedback.
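The outcome-based feedback loop described here can be sketched in a few lines. The following is a minimal, illustrative example (the action names, reward probabilities, and epsilon-greedy strategy are assumptions for illustration, not anything from the interview): the agent has no rule telling it which action is "right"; it must commit an action first, and only then is the outcome judged and fed back as a reward.

```python
import random

# Minimal sketch of outcome-based training: there is no rule book,
# only rewards observed after acting. The agent learns which action
# is best purely from that after-the-fact feedback.

random.seed(0)
actions = ["a", "b", "c"]
true_reward = {"a": 0.2, "b": 0.8, "c": 0.5}  # hidden from the agent
estimates = {a: 0.0 for a in actions}         # agent's learned values
counts = {a: 0 for a in actions}

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-looking action,
    # occasionally explore a random one.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=estimates.get)
    # The action is committed FIRST; the reward arrives only afterward.
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # Incremental average of the rewards observed for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(actions, key=estimates.get)
print(best)  # the agent settles on the highest-reward action
```

Note that nothing in the loop encodes "b is correct"; the agent discovers it only by acting and being judged, which is exactly the act-then-evaluate structure described above, as opposed to a stated rule like "do not steal."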
M3GAN covers this very well. (Spoiler alert.) The programmer in the movie tells the AI robot to protect and take care of a kid, which causes the robot to do a bunch of heinous stuff. It all falls under the umbrella of being a good companion and providing a safe environment for the kid, but that doesn’t preclude the robot from doing illegal things to the neighbor next door, or to a bully who’s picking on the kid. Our challenge in developing AI is catching those intricacies and fully articulating and detailing our instructions.
Mills earned his Ph.D. in Software Engineering and Intelligent Systems from the University of Alberta, Canada, earlier this year, receiving the George Walker Award for Best Doctoral Thesis. At LSU, Mills will teach courses on AI and data analysis and mining. He is developing an introductory course to deep neural networks, or DNN, and an advanced DNN acceleration and compression course.
LSU's Scholarship First Agenda is helping achieve health, prosperity, and security for Louisiana and the world.