Webinar Q&A

*This page was updated May 11, 2020, with additional content.*

Participants of the UMaine AI Webinar had many questions for the panelists that could not be answered during the live event. We have compiled some of these questions with answers from the experts here. Other questions were answered live during the webinar and may not be included here.

If you have questions for our team of AI experts, please email research@maine.edu.

General Artificial Intelligence (AI)

Granted that AI can be useful and powerful – but how does it earn the name “intelligence” (as in, artificial intelligence)? Does that naming invite unneeded fear and concern?

Roy Turner: The “intelligence” comes from the ability of AI programs to do things that, if done by a human, would be thought to require intelligence. For example, the MYCIN program in the 1970s was better than 95% of human expert diagnosticians at diagnosing bacteremia, and we usually think being a doctor requires intelligence. Early programs could ace large sections of IQ tests (though whether those tests themselves mean anything is up for grabs, of course). Current programs can drive vehicles, find lesions in radiographs, predict the efficacy of drugs, determine the sentiment of natural language text, understand natural language (to a fair degree), create plans beyond the ability of expert human planners, and beat world-class champions at chess and Go. I concur that the name may be unfortunate, though – you have an excellent point about fear and concern. But given the scope of AI’s use (and misuse) now and in the near future, maybe it’s okay to be a little concerned?

Jim Isaak: Besides the early use of the term in the 1950s and the even earlier concept of the “Turing test,” what counts as “AI” has been a moving target: at one time it was beating a human at checkers, then chess, then Go or Jeopardy. Today, “Artificial General Intelligence” (AGI) is the term for “indistinguishable from human,” while AI tends to refer to more limited-scope activities such as beating humans at games, diagnosing disease, or “manufacturing personal data,” as was pointed out in this event. Is there unneeded fear and concern as a result? Hopefully not from the name, but many experts have serious concerns, not just about the possibility of AGI but about the potential uses and abuses of AI. There should be concern, maybe even fear, but on an informed basis – directed at where personal privacy is lost, where AI-driven persuasion emerges, and, somewhere in the future, at AGI.

Auroop R. Ganguly: This question makes me think of a (by now somewhat dated) book called “The Emperor’s New Mind” by the renowned physicist Roger Penrose. The book explains the big difference between human and machine intelligence, and most of its central theses (if not all the details) remain true to this day. That being said, machines have been solving problems that were previously considered in the realm of human intelligence, so the goalpost has certainly been shifting. Regarding “unneeded fear,” I am not sure I feel that optimistic, especially when we think of potential AI impacts on warfighting and crowd control, surveillance and loss of privacy, hiring and policy bias, and so on. Like any technology, what we do with AI is in our control, but given the world we live in, if the name “intelligence” instills enough concern to make peoples and societies come together to draft Geneva Convention-like rules and treaties on what can or cannot be done with AI technologies, that may well be worth it in the long term.

Can you address the issue of bias in AI coming from the fact that people make algorithms and each person has biases?

Roy Turner: The algorithms are generally neutral; rather, it’s the data, and for machine learning, the training data, that can skew the results, as has been unfortunately shown by example.

Also, it depends on what the algorithms are. Most AI algorithms are not domain-specific, so whatever biases the author brings to the table aren’t directly related to particular outcomes; that doesn’t mean the author has no biases about how he or she thinks intelligence works or should work, however, so in that respect you are absolutely correct. Rather, most bias in current AI comes from the data the algorithms use or are trained with. For example, a deep learning system’s structure is in essence just a set of layers of neurons connected to other layers; the algorithms are just mathematical operations on tensors. Neither of these has any particular relationship to the author’s own biases with respect to particular outcomes (though perhaps to the kinds of outcomes there can be – recognizing sounds rather than images in a video stream, for example). However, if the training data is biased, the deep learning system will indeed reflect those biases.
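To make that concrete, here is a minimal, hypothetical sketch in Python (not from the webinar; the hiring scenario, features, and data are all invented for illustration): the learning algorithm itself is neutral, but when the historical labels it is trained on are skewed against one group, the fitted model reproduces that skew.

```python
# Illustrative only: a neutral algorithm trained on skewed labels reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 1: a genuinely job-relevant score; Feature 2: group membership (0 or 1).
score = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical hiring labels were biased: equally qualified members of group 1
# were hired less often. The learning algorithm never "knows" this intent.
p_hire = 1.0 / (1.0 + np.exp(-(score - 0.8 * group)))
hired = rng.random(n) < p_hire

model = LogisticRegression().fit(np.column_stack([score, group]), hired)

# Same qualification score, different group -> noticeably different predicted outcome.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

Nothing in the fitting procedure had to be written with any intent for the disparity to appear; it is inherited entirely from the training data.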

Jim Isaak: There are multiple sources of bias. One is poor software engineering, often from poor requirements (“software engineering” is no longer a licensed profession in the U.S., although excellent IEEE and ISO standards exist for best practices here); deliberate bias is possible as well. The selection of the training data set – whether by ignoring diversity, by intentional limitation, or, most likely, by just using the data available – will also bias the AI. Detecting the bias is difficult, perhaps impossible, in some cases. Just as humans have biases and often cannot articulate them, we can’t expect developers, or AIs themselves, to be aware of these.

Yonggang “Tim” Lu: To address this issue, it is important to understand that bias is a necessary component of every AI algorithm. There is no AI algorithm that does not include any bias. In fact, any effort attempting to create a completely unbiased algorithm itself represents a harmful bias. The key here is not bias vs. no bias – it is reasonable bias vs. unreasonable bias. Consider an example: AI algorithms have been used to screen applications and resumes to identify qualified applicants for different job positions. Some studies of such algorithms have argued that they show gender bias against female applicants. Now, the question is: should we have an AI algorithm that maintains absolute gender equality in all job categories? The answer is probably no. For instance, when screening job candidates for firefighter positions, it is probably reasonable to have some bias in favor of male applicants. On the other hand, when screening candidates for loan officer positions, it is certainly unreasonable to have a bias toward either male or female applicants. So, the critical point here is that AI itself cannot address the issue of bias. It requires human intervention to do so. More specifically, it requires high-level subjective judgment about which bias is reasonable or unreasonable in different scenarios. And then, AI algorithms will do their work under the guidance of such human judgment.

Harlan Onsrud: One of the primary bias problems related to AI is with the very large databases being mined. “Because massive data collections often reflect current and past discriminatory practices, machine learning inevitably incorporates those biases into choices offered to individuals. The discrimination becomes embedded almost invisibly into the automated systems.

“For example, some users of B&B room reservation systems with distinctly sounding African-American names have been less likely to be accepted as guests at certain establishments. If machine recognition picks up such a pattern, it will typically affect guest rental offerings to future parties with similar names. This has the effect of embedding discriminatory practices in offerings even though it may not have been the intent of the software designers or a corporate entity to constrain offers in such a manner.” Many more examples exist. See, for instance, page 6 of Being Human in an Algorithmically Controlled World.

We as humans, whether or not we are code writers, all have biases. In addition, human coders often have little idea of what an AI “learner” has learned and incorporated into an algorithm. This becomes problematic when incorporated biases result in prejudicial actions resulting in injustice whether or not those prejudicial actions are against the law. For detailed discussions of algorithmic biases and potential resulting prejudicial actions, good places to start might be recent books by Cathy O’Neil, Shoshana Zuboff, David Sumpter, and Pedro Domingos. All of these books are listed in our most recent bibliography for COS 490 Computers, Ethics and Society found here. Several scholarly articles cited in the previously mentioned “Being Human” reference also discuss these issues.

Auroop R. Ganguly: This is a major concern for the development and deployment of AI. The biases stem less from the algorithms per se and more from the biases inherent in the data, which are picked up by the parameters or implementations of the algorithms. However, algorithms can perhaps be designed to remove the bias. Furthermore, questions like how best to remove bias or embed bias correction – especially given that one person’s illogical bias may appear perfectly logical and axiomatic to another person – may quickly lead to ethical challenges. Indeed, embedding ethics into AI is a major area of study.
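As a purely illustrative sketch of what designing bias correction into a pipeline can look like (a generic technique presented under my own assumptions, not a method the panelists described), the snippet below applies “reweighing”: training samples are weighted so that group membership and the outcome label look statistically independent before the model is fit.

```python
# Sketch of "reweighing" bias mitigation; data are synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(group, label):
    """Per-sample weights: P(group) * P(label) / P(group, label)."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            if mask.any():
                weights[mask] = ((group == g).mean() * (label == y).mean()) / mask.mean()
    return weights

# Synthetic training data (invented): the label is partly driven by group membership.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
group = rng.integers(0, 2, size=1000)
label = (X[:, 0] + 0.7 * group + rng.normal(scale=0.5, size=1000)) > 0

# Most scikit-learn estimators accept per-sample weights at fitting time.
model = LogisticRegression().fit(X, label, sample_weight=reweigh(group, label))
```

Which bias to correct, and whether the corrected behavior is the “reasonable” one, remains exactly the kind of human judgment call discussed above.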

UMaine AI

How can current UMaine students get involved with UMaine AI and, in particular, the Sekeh Lab?

Salimeh Yasaei Sekeh: Very simple: students can reach out to Dr. Salimeh Yasaei-Sekeh via email at salimeh.yasaei@maine.edu and share their interest in joining the Sekeh Lab.

Harlan Onsrud: Dr. Sekeh will be teaching a new graduate course in Fall 2020 on Statistics Foundations for Data Science. This and several other new graduate data science courses are listed in this news release. Please note that the prerequisites are being kept to a minimum so that students from a broad range of STEM areas are likely to be able to gain entry to these courses.

Roy Turner: In general, all of our labs are welcoming to undergraduate and graduate student involvement. My lab, MaineSAIL, has a history of involving undergrads in research, including having them co-author published works with grad students and faculty. If you’re interested, contact the lab directors directly (Dr. Yasaei-Sekeh for her lab, Dr. Hahmann for the SKAI Lab, or me for MaineSAIL).

What sort of resources will be available moving forward for labs that want to apply neural networks? I have students using the Matlab Machine Learning Toolkit, but I’m always nervous…wanting to pair with experts. Is there a working group to attend? Hands-on workshops?

Roy Turner: With respect to people resources, you should contact the directors of the AI labs to see if there are areas of mutual interest and/or if they can provide advice and help. There are often lab meetings that are open to interested parties, at least in my lab (MaineSAIL). There is also a local machine learning group (not affiliated with UMaine) that meets regularly; flyers are usually posted by the elevator in Boardman as well as elsewhere on campus. With respect to computing resources, we received a $350K grant from NSF to support the acquisition of a GPU cluster for machine learning, visualization, and scientific modeling and simulation. It will include several petaFLOPS of processing power, and it will be available to anyone in the state with a need. We are in the process of acquiring the first batch of equipment now, with more to follow in the fall. If you are interested, contact me (rturner@maine.edu) over the summer for the status and information about using it.

Auroop R. Ganguly: Pairing with experts is a great idea but there are also a variety of online resources such as freely available video tutorials and open access codes in Python, R, and now Julia. The AI community usually attempts to make all research papers open access. Benchmark data sets are also available. Understanding the problem that must be solved with AI (or other technologies) is always a good idea so that the exercise does not become too much like a solution searching for a problem.

We are starting up a new cybersecurity degree program. In the process of getting up to speed on the advances in cybersecurity, it was discovered that AI is beginning to be used as a tool for the fast understanding of an attack. So will this new UMaine effort be exploring this area?

Auroop R. Ganguly: I cannot answer questions specific to UMaine, but certainly AI is being used more and more for cybersecurity. However, the AI used in cyber occasionally tends to be specialized, in the sense that one may need to think beyond vanilla utilization of off-the-shelf tools.

In addition to this (excellent) webinar, what are some other methods being used to build Maine’s resource network, to help us all tap into available expertise for shared projects and programs?

Roy Turner: As mentioned above, we received a $350K grant from NSF to support the acquisition of a GPU cluster for machine learning, visualization, and scientific modeling and simulation. It will include several petaFLOPS of processing power and will be available to anyone in the state with a need. We are in the process of acquiring the first batch of equipment now, with more to follow in the fall. If you are interested, contact me (rturner@maine.edu) over the summer for the status and information about using it.

What is the role of robotics in UMaine’s AI initiatives?

Salimeh Yasaei Sekeh: This is a long-term plan, and at this point we are not sure how deeply we’ll be involved in robotics. However, in the Sekeh Lab many of the proposed techniques are applicable to robotics and computer vision. Hopefully, with the University of Maine’s collaborations with other universities and institutions, specifically through the Sekeh Lab, we’ll be able to explore more robotics-related applications and projects.

Roy Turner: I can only answer from my own perspective, but my lab (MaineSAIL) focuses a good deal of attention on controlling intelligent agents, including autonomous underwater vehicles and other autonomous drones. At the moment, our research is focused on high-level control (given a set of goals, find a way to achieve them while surviving, handling unanticipated events, etc.), and it is done mostly in simulation. We do have some robots for use in the research, and there are autonomous underwater and surface vehicles on campus that we intend to work with down the road.

Healthcare and COVID-19 Questions

What AI approaches have been used in healthcare diagnoses?

Auroop R. Ganguly: Healthcare AI is a major area of research. The AI approaches have ranged from Deep Learning to Rare Events Mining as well as Nonlinear Dynamics. The diversity of the AI approaches used in healthcare diagnostics is really a reflection of the diversity of challenges in that domain.

Roy Turner: Medical AI has been a focus of the field since the 70s, with some outstanding limited-scope programs being developed early on (MYCIN for diagnosing bacteremia, INTERNIST-I for general internal medicine, etc.). These were limited in their application, though, due to practical considerations (keeping the knowledge updated, figuring out who would get sued when they misdiagnosed — I’m serious, this was directly from a colleague at the National Library of Medicine and from other sources). Recently, there has been a great deal of work in machine learning focused on things like detecting tumors in radiography (including some work on campus using support vector machines, for example), finding drugs, etc.

Jim Isaak: Various approaches. Breast cancer screening is one example, and one where bias is a consideration: early data sets came from one scanner with a broken pixel, with the result that this pixel was considered an indicator. Other uses include evaluating X-rays, diagnostic “expert systems,” and many more – such as monitoring searches to see whether a locale has an emerging abnormality.

How is AI being applied to our current COVID-19 pandemic?

Jim Isaak: One application that emerged back with SARS is tracking individuals online through searches, purchasing patterns, etc. This may point to emerging “hotspots.” A Canadian firm, Blue Dot, sent out warnings to its customers on Dec. 31, 2019, using AI technology.

Salimeh Yasaei Sekeh: Our current pandemic involves time-series data, and different AI algorithms, including recurrent neural networks, can be useful in forecasting the outbreak. Dr. Salimeh Yasaei Sekeh and her ML team are studying this as an ongoing project.
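For illustration only (this is not the Sekeh Lab’s model; the architecture, window length, and synthetic data below are assumptions), a minimal recurrent-network forecaster in Python/PyTorch could look like the following: an LSTM reads the last 14 days of a case-count series and predicts the next day.

```python
# Minimal, assumed sketch of one-step-ahead outbreak forecasting with an LSTM.
import torch
import torch.nn as nn

# Synthetic daily case counts (invented): a noisy logistic growth curve scaled to [0, 1].
t = torch.arange(200, dtype=torch.float32)
cases = 1.0 / (1.0 + torch.exp(-(t - 100) / 15)) + 0.02 * torch.randn(200)

window = 14  # use the last 14 days to predict the next day
X = torch.stack([cases[i:i + window] for i in range(len(cases) - window)]).unsqueeze(-1)
y = cases[window:].unsqueeze(-1)

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, window, hidden)
        return self.head(out[:, -1])   # predict from the final time step

model = Forecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# One-step-ahead forecast from the most recent 14 observed days.
with torch.no_grad():
    next_day = model(cases[-window:].reshape(1, window, 1))
print(float(next_day))
```

In practice such a model would be trained on real reported case counts, with proper train/validation splits and uncertainty estimates.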

Roy Turner: There has been some work done, I believe, on using AI to track the pandemic’s progress as well as to look for drugs to treat it.

Auroop R. Ganguly: Data sciences are being used for risk modeling and visual analysis, building computational models of epidemiology for vulnerability and risk projections, understanding and predicting the impacts of social distancing, demographics, and weather on disease spread, as well as generating environmentally sustainable recovery pathways for transportation, supply chains, businesses, economies, and communities, among other areas. The term “data sciences” in this context may include statistics, information theory, signal processing, nonlinear dynamics, network science and engineering, operations research, machine learning, and data mining. When exactly data science crosses over to AI has always been a mystery to me, but if we take an inclusive view then these tools and methods are being used significantly. By the way, in addition to data sciences, certain technologies ranging from game theory to agent-based simulations may also fall under the realm of AI and are being used.

Can AI be utilized to come up with suggested solutions for the pending economic crises coming out of the COVID-19 pandemic?

Jim Isaak: This hints at a level of intuitive intelligence that seems unlikely prior to AGI technology. However, AI can be applied to questions like “Given this data set, what appear to be the rate of infection, incubation period, recovery period, and contagious period?” and then to which lockdown, social-distancing, or other programs (with many different countries to choose from) appear to be relevant – it may be possible for AI technology to identify best practices and when to apply them. Also, AI systems might be able to sort through various vaccination alternatives to identify those with greater risk or greater impact.

Auroop R. Ganguly: There are a lot of related AI success stories that can and need to be brought to bear on the problem in a coordinated manner. My response could quickly get too long on this, but the two-word answer is “Yes, potentially.”

Privacy and Safety

In parallel with the more technical computer science education, what is being done to help students explore the ethical and social impacts of these technologies? In particular, many AI constructs revolve around the notion of a model — which is useful precisely because of its simplicity but can be frail exactly because of that same simplicity. How do students learn about the sensitivity of models?

Jim Isaak: While not specific to the sensitivity, I strongly encourage introducing students to the IEEE/ISO software engineering standards and the AI & Ethics work.

Auroop R. Ganguly: Sensitivity studies and uncertainty characterization are embedded within AI research and should be a part of AI curricula. However, when it comes to the ethical and social dimensions, a lot more needs to be done, not just in computer science education but beyond it as well, such as in engineering and social science education.

Harlan Onsrud: All undergraduate students at the University of Maine are required to take a formal ethics course as part of their general education requirement. Further, they typically take a course addressing ethical issues within their degree program. By example, undergraduate students in Computer Science all take a course titled COS 490 Computers, Ethics, and Society. It explicitly addresses the potential ethical, legal, economic, political, and societal ramifications of the technologies that they will be helping to advance within their careers. Their senior capstone course also touches on these and related client and employer issues. At the graduate level, every student pursuing a thesis is required to take a one-credit but intense course in the Responsible Conduct of Research. Further, graduate students focused on information technologies have the opportunity to take a range of ethical and legal courses (e.g. SIE 525 Information Systems Law).

How do students learn how reliable models may be when applied to certain kinds and amounts of data? We have researchers and instructors specifically focused on data analysis domains. A new interdisciplinary Data Science and Engineering graduate program at the University of Maine will make new graduate courses and a cohesive curriculum in the area available to an even broader cross-section of the student body.

Can you touch on the ways in which data privacy standards/laws are being upheld in specific AI applications, like autonomous vehicles, which rely on massive user-specific collection and monitoring techniques (e.g., computer vision, biometric monitoring, etc.)?

Jim Isaak: Europe and California have fairly strong privacy protections, but other jurisdictions vary significantly. Advertising creates a strong economic incentive to bypass, block, or ignore protections, and of course there are “shrink-wrapped” licenses (“The big print giveth, the small print taketh away”) in which users relinquish their privacy with little or no awareness. Bottom line: if it gets collected, you can assume it will be abused… and it is being collected.

Harlan Onsrud: This is a huge and complex subject. As you probably know, the U.S. does not have a strong, broadly encompassing general privacy right for individuals. Rather, most of our privacy laws are sectoral, covering areas that have arisen as problematic over time and for which Congress has been willing to pass protective legislation (e.g., health records, education records, financial records). A good introductory reference providing an overview of the sectoral areas currently covered is perhaps the book by Cherie Givens in the bibliography referenced above. The first referenced article above, on “Being Human,” goes into why Congress has been unable to achieve more and provides some specific examples.