Webinar Transcript

Ali Abedi: Welcome to the University of Maine artificial intelligence webinar. My name is Ali Abedi, Assistant Vice President for Research and Professor of Electrical and Computer Engineering at the University of Maine.

Ali Abedi: I will be moderating this webinar with my colleagues from the College of Engineering and the School of Computing and Information Sciences.

Ali Abedi: It is now my great pleasure to introduce Dr. Joan Ferrini-Mundy, President of the University of Maine and its regional campus, the University of Maine at Machias.

Ali Abedi: President Ferrini-Mundy holds a PhD in mathematics education from the University of New Hampshire. She came to Maine from the National Science Foundation, where she served as Chief Operating Officer.

Ali Abedi: Her distinguished career spans the fields of mathematics education, STEM education and policy, teacher education, and research administration. President Ferrini-Mundy is a member of the Board on Higher Education and Workforce of the National Academies. Please join me in welcoming President Ferrini-Mundy to our virtual podium.

President Ferrini-Mundy, UMaine: Thank you very much, Dr. Abedi. Good afternoon, everyone. It’s a pleasure to be able to welcome you all to this first ever webinar on artificial intelligence sponsored by the University of Maine Artificial Intelligence Initiative.

President Ferrini-Mundy, UMaine: I’m sorry that we’re getting together under these very difficult circumstances but delighted to be able to be part of initiating a conversation among all of you that I hope will continue well beyond this event.

President Ferrini-Mundy, UMaine: As our world is rapidly changing, particularly through this pandemic, our reliance on digital software, applications, and AI is impacting us on a daily basis and will impact us in untold ways as we look toward the future.

President Ferrini-Mundy, UMaine: So being able to do that well matters in whatever we’re called upon to do, from online banking and streaming videos to autonomous vehicles to detection of cancer to a host of health-related applications.

President Ferrini-Mundy, UMaine: AI connects the world. The University of Maine Artificial Intelligence Initiative, which is UMaine AI, is dedicated to the advancement of AI and its applications in response to the needs of the state of Maine. It is a unique Maine-based venture that brings together university, industry, government, and community collaborators from Maine and beyond to advance the field of artificial intelligence, and in particular to promote its applications in fields where it is not yet fully applied.

President Ferrini-Mundy, UMaine: The initiative leverages our universities’ strengths to bring a multi-disciplinary approach to solving problems through cutting-edge research and also of course leverages our wonderful partnerships across the state and beyond.

President Ferrini-Mundy, UMaine: This webinar will feature a diverse panel of AI experts working in education, research, and industry to present on advances in AI, applications of AI, education and workforce development, and social, ethical, policy, and legal considerations related to AI. Thank you very much for joining us.

Ali Abedi: Thank you very much, Dr. Ferrini-Mundy. Before we start the panels, I would like to remind all the attendees to please keep your microphones on mute and your videos off unless you’re presenting. You can use the Q&A option at the bottom of your screen to post your questions, and at the end of each 30-minute panel the moderators will pose your questions to the panelists. Now, the first panel will be moderated by Dr. Terry Yoo from the School of Computing and Information Sciences. Terry, please take it away.

Session 1

Terry Yoo: Thank you very much. My name is Terry Yoo and I am an associate professor in the School of Computing and Information Science. This is a very exciting panel to start off this session today.

Terry Yoo: I’m joined by three distinguished panelists, Roy Turner, Salimeh Yasaei Sekeh, and Carla Brodley, and I will be introducing each in turn.

Terry Yoo: Our first speaker is Roy Turner, who will give us a brief statement. We will then go through the other panelists in turn, each giving a brief statement, and afterward they’ll take questions and answers.

Terry Yoo: Roy Turner got his PhD at the Georgia Institute of Technology and later was a visiting research professor at the University of New Hampshire and Northeastern University before joining the faculty at the University of Maine. He is one of our most senior professors in the School of Computing and Information Science. He is also a principal investigator for a Major Research Instrumentation program grant from the National Science Foundation. Roy, over to you.

Roy Turner: I’ve been asked to give a bit of an overview of what artificial intelligence is and kind of frame the discussion a bit.

Roy Turner: AI is misunderstood in many ways, but what it is, is an area of computer science whose goal is basically to give computers some abilities they currently don’t have: to allow them to do things that require intelligence, so, for example, medical diagnostic reasoning, legal reasoning, planning, fabrication of objects, things like that, all of which can be done by AI; and to do the things that animals and humans can do very easily but that are extremely difficult for computers.

Roy Turner: So, for example, to examine an image and detect things in it, and to understand natural language: things that we can do easily but computers can’t.

Roy Turner: There’s a huge number of AI areas and techniques, but I’m not going to go through all of those, obviously. Automated reasoning is one, which includes things like theorem proving, planning, and scheduling.

Roy Turner: Intelligent control, for things like Tesla cars and drones, is another. Interpreting sensory information, for example natural language processing and computer vision, is also in that category. Then there are knowledge-based or expert systems: you’ve probably all been under the scrutiny of an expert system at one time or another, when you had a credit card transaction or applied for a loan or something like that.
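
The kind of rule-based expert system Roy describes for loan decisions can be pictured with a short Python sketch. Every rule, field name, and threshold below is invented purely for illustration; no real lender works from these rules:

```python
# A toy knowledge-based (expert) system of the kind described above.
# Every rule and threshold here is made up purely for illustration.
RULES = [
    (lambda a: a["monthly_income"] < 2 * a["monthly_payment"], "deny: income too low"),
    (lambda a: a["late_payments"] > 3, "deny: poor payment history"),
]

def evaluate_loan(applicant):
    # Fire rules in order; the first matching rule decides the outcome.
    for condition, verdict in RULES:
        if condition(applicant):
            return verdict
    return "approve"

print(evaluate_loan({"monthly_income": 5000, "monthly_payment": 800, "late_payments": 0}))
# prints "approve"
```

Real systems of this kind encode hundreds of rules elicited from human experts, which is what made them one of the early commercial successes of AI.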

Roy Turner: And then there’s the most burgeoning area at the moment, machine learning, which Salimeh will talk about in a bit.

Roy Turner: So, what can AI do? Well, web searches are one thing we interact with on a daily basis. In fact, Google is primarily an AI company. One of my colleagues is the vice president for search quality out there, and he’s got a PhD in AI, for example.

Roy Turner: Intelligent control, as I mentioned, for drones and self-driving cars; planning complex operations, like getting materiel where it needs to be for wartime activities or planning factory operations.

Roy Turner: Medical diagnosis, car diagnosis, things like that; natural language processing, all sorts of things that are becoming important, sentiment analysis for social media to pick up on hate speech, for example; and data mining of large data sets.

Roy Turner: The question arises: since AI has been around for arguably 100 years, but really since about 1956, why AI right now? Why an AI initiative? Well, it’s a confluence of three things, primarily. We’ve got great societal needs at the moment; many problems facing us are simply beyond what we can do without some help from automation, for example climate change, even the COVID-19 pandemic, all sorts of things that are coming up now that need significant amounts of resources and analysis. We also, for the first time, have tremendous amounts of data, not only data that needs to be processed, but data that can be used to train artificial intelligence systems. And finally, in the last decade or so, people have made tremendous advances in the algorithms behind machine learning. In addition, there have been advances in using the kinds of processors that we have in our computers, the GPUs.

Roy Turner: These are basically small supercomputers that we can use for our algorithms. And so this has all come together nicely in the last 10 years or so.

Roy Turner: Another reason why is the predicted economic impact of AI, in millions of dollars. We’re right about here, if you can see my pointer; if not, we’re in 2020, and you can see the impact is expected to follow an exponential growth curve.

Roy Turner: Venture capital shows the same trend; these people are not stupid, and they put their money where they think they’re going to get some reward, so again an exponential increase. So lastly, why a UMaine artificial intelligence initiative? Well, we’ve got active labs in AI research. For example,

Roy Turner: The SKAI lab, the Sekeh lab, and the MaineSAIL lab. We have very good collaborators, most recently the Roux Institute, which Dr. Brodley will talk about.

Roy Turner: There’s a host of applications ongoing on campus already; these are all applications that people are using AI for on campus. There are applications across the state as well. We have quite a few resources, as Terry alluded to. We had a Major Research Instrumentation grant that’s going to install a petaflop GPU cluster, which means basically a quadrillion floating-point operations per second.

Roy Turner: That will be primarily for artificial intelligence. And finally, as part of Maine’s outreach mission, we’re uniquely situated to reach out to K-12 and begin to educate the workforce on artificial intelligence, this emerging technology, to help industry in the state, the government, fisheries, etc. All right, thank you, Terry.

Terry Yoo: Okay. Our next panelist is Salimeh Yasaei Sekeh. We are going from one of our most venerable faculty to one of our newest hires.

Terry Yoo: In fact, she is our newest hire in the School of Computing and Information Science and the computer science program. Dr. Sekeh is an assistant professor of computer science. Prior to joining our faculty at the University of Maine, she was a postdoctoral research fellow in the CS department at the University of Michigan, Ann Arbor. She has also had appointments in Brazil and Italy.

Terry Yoo: She is the Director of the Sekeh lab, focusing her research on designing and analyzing machine learning algorithms, deep learning techniques, and the application of machine learning techniques to real-time problems. Dr. Yasaei Sekeh.

Salimeh Yasaei Sekeh: Thank you, Terry.

Salimeh Yasaei Sekeh: Okay. Good afternoon, everyone. I hope that you can see my slides. So I’m going to talk about advances in AI from an algorithm design and machine learning perspective.

Salimeh Yasaei Sekeh: Given the exponential rise of interest in AI, experts have called for major studies in developing AI approaches. In recent decades, researchers have been working on designing and analyzing methods that are capable of beating human experts in different fields.

Salimeh Yasaei Sekeh: However, the common use of AI brings prominent challenges: we need smart methods with low risk and competitive complexity.

Salimeh Yasaei Sekeh: This is because digital life demands acceleration and accuracy in processing with these algorithms. In the Sekeh lab, we address these types of challenges.

Salimeh Yasaei Sekeh: So machine learning, specifically deep learning, has become massively important in recent decades because of its fascinating applications to many modern real-world problems. In the Sekeh lab we advance and improve deep learning techniques, and we try to extend deep learning architectures, as these networks are extremely intensive in terms of both computation and memory usage. We explore their applications in various domains and disciplines, like computer vision, climate change, forest science, and so on.

Salimeh Yasaei Sekeh: In addition, in the Sekeh Lab at the University of Maine, we develop AI techniques that make computers creative, like adversarial machine learning. But this is not our only mission in the Sekeh lab. We also rely on AI and online machine learning approaches: we design time-series methods that analyze big, massive streaming data. Analysis of massive streaming data, time-series algorithms, and prediction play an important role in the future of AI. Therefore, in the Sekeh lab we design online streaming feature selection techniques that are applicable to problems like autonomous cars, video streaming, and movie-streaming companies. In addition, we focus on time-series deep learning.

Salimeh Yasaei Sekeh: As new technology that supports larger-scale streaming data and saves human lives, we leverage time-series deep architectures and explore their applications in many real-world problems. And there is a side project here at the University of Maine in the Sekeh lab.

Salimeh Yasaei Sekeh: With my machine learning team, we’re working on COVID-19 forecasting, locally and globally, using these time-series deep learning architectures, with the hope that we will be able to help and save some lives.

Salimeh Yasaei Sekeh: Thank you, Terry. And I’m happy to take the questions.

Terry Yoo: All right. Thank you, Salimeh.

Terry Yoo: The University of Maine is also both grateful and happy to have Carla Brodley, the Dean of the Khoury College of Computer Sciences at Northeastern University, joining us this afternoon. Prior to joining Northeastern, she was a professor in the Department of Computer Science at Tufts University, and before Tufts she was on the faculty of the School of Electrical Engineering at Purdue.

Terry Yoo: She’s a Fellow of the ACM and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). She has co-chaired or been program co-chair for AAAI, as well as the International Conference on Machine Learning. She is currently a board member of the CRA, the Computing Research Association.

Terry Yoo: And she’s been a member of the advisory committee for the National Science Foundation’s directorate for Computer and Information Science and Engineering.

Terry Yoo: Please welcome Carla Brodley.

Carla Brodley: Thank you, Terry. First of all, I’m very honored to participate in this webinar today.

Carla Brodley: My background research is in applied AI, and one of the things I love about the directions that you’re speaking about, and of course the really incredible research that we just heard about, is the fact that when you do applied AI, you have the ability to make a huge scientific impact in the field to which you’re applying artificial intelligence, and through that application you also uncover areas where AI has not progressed far enough to solve that particular problem.

Carla Brodley: I want to ground that with two examples from my own research. The first was with York University’s epilepsy center, where we were trying to find the lesions in treatment-resistant epilepsy patients that were causing their seizures. A treatment-resistant epilepsy patient is one for whom the drugs don’t work.

Carla Brodley: Now, I didn’t know this prior to the collaboration, but it turns out that the only way you can solve this problem is through a process called surgical resection, which is where you scoop out the bit of the brain that causes the seizures. Of course, you have to check that it’s not a place that you really need. And although I was invited multiple times to watch the operation, I have declined in all cases.

Carla Brodley: So what did we do? Well, it turns out that the lesions that cause this are typically on the cortical surface, and expert neuroradiologists can’t see these lesions. So we took a machine learning approach to this.

Carla Brodley: And we were able to get the efficacy of our ability to find the lesions up to 80%, whereas before it had been 20%. Now, if you can find the actual lesion, the efficacy of the surgical resection goes from 29% to 66%, and I can tell you, if you’re telling me that I should have an operation on my brain, 66% sounds a lot better than 29%; obviously 100% would be better. But in creating this applied machine learning, we had to develop new machine learning methods, which were then published in the AI literature and were applicable to other problems.

Carla Brodley: A second example, and one that I think will resonate with Maine in particular: we looked at how to use machine learning to create maps of global land cover, that is, what’s on the Earth’s surface, from satellite data.

Carla Brodley: This is a very challenging problem because you only have the set of features that the satellite can produce to work with. Through our application, we were able to figure out that the 17 land cover classes that had been dictated by the community were not actually representative of what could be distinguished by the data. So, for example, agriculture really needed some finer distinctions for the different types of agriculture, and the distinction between mixed forest and coniferous forest could not be ascertained from the spectral data. So, again, we invented a new machine learning method that looked at how to use both the raw data and the data that had been labeled to come up with a new definition of land cover classes.
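
Carla’s finding, that some labeled land-cover classes cannot be told apart in the satellite data, can be illustrated with a small sketch. The two-band “spectral signatures” and the merge threshold below are invented for illustration, not real sensor values:

```python
# Hypothetical 2-band spectral signatures for three labeled land-cover classes.
# The numbers and the merge threshold are invented for illustration only.
classes = {
    "coniferous": [(0.30, 0.62), (0.31, 0.60), (0.29, 0.61)],
    "mixed_forest": [(0.30, 0.61), (0.32, 0.60), (0.30, 0.62)],
    "cropland": [(0.55, 0.20), (0.57, 0.22), (0.54, 0.21)],
}

def centroid(points):
    # Mean spectral signature of one labeled class.
    return tuple(sum(p[i] for p in points) / len(points) for i in range(2))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def indistinct_pairs(classes, threshold=0.05):
    # Label pairs whose mean signatures are too close to tell apart in the data.
    names = sorted(classes)
    cents = {n: centroid(classes[n]) for n in names}
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if distance(cents[a], cents[b]) < threshold]

print(indistinct_pairs(classes))  # [('coniferous', 'mixed_forest')]
```

In this toy setup, the two forest classes collapse into one, while cropland stays distinct, which is the flavor of redefining the class list from what the data can actually support.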

Carla Brodley: So I love this applied focus, it’s going to lead to wonderful inventions and basic machine learning and AI.

Carla Brodley: Now, on behalf of Northeastern University, I want to say that we’re extremely excited to join the Maine ecosystem with the Roux Institute. The focus of the Roux Institute is applied, human-in-the-loop artificial intelligence.

Carla Brodley: And in particular, we are really excited to build partnerships with AI faculty and then faculty from the areas of excellence in Maine, and I apologize if I leave any out, but such as forestry, the Climate Change Institute, composite materials, pulp and paper, and then of course all of the many initiatives around marine science that the University of Maine is so well known for.

Carla Brodley: We’re really excited to create joint government- and industry-funded research, and as part of this beginning of our collaboration, we are hoping to build up the synergy where Maine faculty might spend sabbaticals at the Roux Institute and vice versa, that we have visiting fellowships for PhD students in both directions, and that we really work collaboratively with you.

Carla Brodley: And of course it’s our hope that the University of Maine will hire a lot of AI faculty to collaborate with us in these new initiatives, and in particular, we’re really excited about applied AI for the earth and climate initiatives that you’re so well known for. Thank you.

Terry Yoo: Thank you, Carla. So we’d like to open the panel for questions. If the panelists would please start their videos again and unmute their microphones, I’d appreciate it.

Terry Yoo: We have a question online right now, this one from Arthur Fink, from Portland.

Terry Yoo: The question, as stated: granted that AI can be useful and powerful, but how does it earn the name “intelligence,” as in artificial intelligence, and does that naming invite unneeded fear and concern? I’ll give this question first to our honored guest, Carla Brodley.

Carla Brodley: I was actually going to say, I thought Roy could answer that question better than I. Roy, I am passing it to you.

Roy Turner: Thank you, Carla.

Roy Turner: First of all, I’m sorry about the slides. I didn’t notice they weren’t moving.

Roy Turner: It’s called intelligence, I think, originally maybe with a little bit of hyperbole, like, you know, the “digital brains” back in the ’40s when people talked about computers. But the term was coined because we hoped to do some things that seem to require intelligence when people do them, as I mentioned, things like medical diagnosis or protein folding. So some folks, yes, are working on full-fledged attempts to create intelligent machines.

Roy Turner: But not many. Most of us are concerned with adding intelligence to computers. But I do understand your concern about unneeded fear; that certainly has been part and parcel of AI since the beginning. I think, though, we’re beginning to earn at least a little bit of the sobriquet of intelligence with some of the algorithms we’re using now.

Terry Yoo: So I’m going to mispronounce the name of the person asking the question, but it’s Ilia.

Terry Yoo: And I’m not going to try the last name. The question: how can current UMaine students get involved with UMaine AI, and in particular the Sekeh lab?

Salimeh Yasaei Sekeh: Thanks. I think this could be for me then.

Salimeh Yasaei Sekeh: It’s very easy: just email me, and I’ll try to do my best. I usually don’t miss email, so I’ll get back to you as soon as possible. Just email me if you’re interested, and we’ll take it from there for sure.

Terry Yoo: I think this question is also for Salimeh, but after her I’ll invite the other panelists to respond. This question is from Lois Anne: how is AI being applied to our current pandemic? I refer this one to Salimeh partly because one of your slides mentioned COVID.

Salimeh Yasaei Sekeh: Yeah, so with my ML team, we put together a group here at the University of Maine and started to survey what is out there, especially deep learning techniques that could be applied to this type of pandemic. There are some non-machine-learning techniques that many people from very well-known universities are working on, including SEIR models, and we started from those techniques as well, but specifically we are trying to implement recurrent neural networks.

Salimeh Yasaei Sekeh: These are deep learning architectures for this type of time-series data. Of course, it’s very challenging, and we have challenges, but we also have very interesting results. So this is a very much ongoing process, and hopefully the results will be out very soon.
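
The recurrent architectures Salimeh mentions are beyond a quick sketch, but their simplest relative, a one-step autoregressive model fit by least squares, shows the core idea of learning to predict the next value of a series from the previous one. The daily case counts below are synthetic (exact 10% daily growth), not real COVID data:

```python
# A one-step autoregressive model: the simplest relative of the recurrent
# networks discussed above. All case counts are synthetic (10% daily growth).
def fit_ar1(series):
    # Least-squares fit of x[t+1] ~ a * x[t] + b.
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

def forecast(series, steps, a, b):
    # Roll the fitted model forward to predict future values.
    out = list(series)
    for _ in range(steps):
        out.append(a * out[-1] + b)
    return out[len(series):]

cases = [100 * 1.1 ** t for t in range(14)]   # hypothetical daily counts
a, b = fit_ar1(cases)
print(round(a, 3))  # 1.1 -- the model recovers the daily growth rate
```

A recurrent neural network generalizes this by carrying a learned hidden state across time steps instead of a single coefficient, which is what makes it suited to messier real-world series.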

Terry Yoo: Thank you. Anybody else?

Roy Turner: Oops, I’m sorry. I believe there’s some work being done elsewhere, as Salimeh indicated, looking at finding drugs for treatment of coronavirus and also antibodies that might fight the virus. A lot of work is being done on that in general, and I think people are applying it to COVID.

Terry Yoo: A question also from the internet.

Terry Yoo: I’ll start this one with you, Roy, but perhaps the other panelists can address it if you wish to pass it off. Can you address the issue of bias in AI, coming from the fact that people make algorithms and each person has their own biases?

Roy Turner: I will quickly address this and pass it to Carla, because she’s got way broader experience in this than I, not in bias, but in AI.

Roy Turner: One of the things with the old-style AI, the original AI: yes, the algorithms do have a bit of bias in them, I mean everything does, but most of what we’re doing is not something that would be biased one way or another; we’re coming up with general techniques.

Roy Turner: Some of the newer things, and Salimeh can also address this, are not biased so much in their algorithms as in the data that gets fed to the algorithms.

Roy Turner: A well-known example is the problem with Microsoft’s online chatbot that became anti-Semitic because people were feeding it data from the web that was anti-Semitic. So I’m not sure of the best way to address it, but I’m sure Carla knows.

Carla Brodley: Yeah, actually, Roy, I was trying to think about how I could add to that; I thought it was an excellent answer.

Carla Brodley: The problem is not the algorithm. The problem is the data, and data has two problems with it. The first is: what are the features that you use to define each data point? You may not have features that are able to fully represent your data. A quick example of this would be: if you’re trying to predict who has heart disease and all you have is age and hand size, you will correctly predict that older people with large hands will die of heart disease and younger people with small hands won’t. But that’s not what you’re looking for. And if you don’t have a good representative sample of the data, you might not even be able to do that.

Carla Brodley: So it’s the features that represent each data point, and then it’s also the data points that you collect. A classic example of this is sentencing software that was used to predict recidivism rates, which predicted that African Americans should not be paroled because they were just going to recommit crimes; that was based on data whose distribution was completely off because of who’s incarcerated in this country.

Carla Brodley: These types of issues now have their own conference. It’s very important.

Carla Brodley: The first principle of applying machine learning is that you have a representative data set from which to train your models, one that is the same as the data to which you will apply your models, and that you understand whether it’s biased in some way in making those decisions. What was particularly egregious about the example that I gave is that the company that created this algorithm didn’t explain why it was making these decisions. So just using machine learning as a box that you can’t see into is a mistake, and research also needs to happen on the explainability of the decisions that you’re making.
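
Carla’s point that the bias lives in the data rather than the algorithm can be illustrated with a deliberately crude sketch: the same trivial “model” learns opposite conclusions from a skewed sample versus a representative one. All numbers, including the 30% rate, are invented:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def train_majority_classifier(labels):
    # A deliberately trivial "model": always predict the most common label seen.
    return max(set(labels), key=labels.count)

# Hypothetical population: 30% reoffend (label 1), 70% do not (label 0).
population = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]

# Biased training sample: drawn mostly from cases that ended in reoffense.
biased_sample = ([y for y in population if y == 1][:500]
                 + [y for y in population if y == 0][:100])

model_biased = train_majority_classifier(biased_sample)   # learns "will reoffend"
model_fair = train_majority_classifier(population[:600])  # learns "will not"
print(model_biased, model_fair)  # 1 0
```

The classifier itself is identical in both cases; only the sampling changed, which is exactly why a representative training set is the first principle Carla states.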

Terry Yoo: We have another question, but I think we’re going to have to push on. I have one question for our distinguished guest from Northeastern University.

Terry Yoo: In the time that we have remaining, and coming from the outside in particular: we’ve already mentioned that the Roux Institute is a new resource in Portland.

Terry Yoo: You obviously have faculty and laboratories here at the University of Maine, you mentioned some of our other work, and you’re particularly interested in applied AI. I’m curious, what resources are needed to expand Maine’s participation in research and development in this area? We appear to have a good start. What more is needed?

Carla Brodley: So, were I to analyze this: there are so many different areas of AI, which Roy did such a good job elucidating, and obviously you’ve made a really spectacular recent hire.

Carla Brodley: What I would do, if I were thinking about this, is look at all of the research areas where you think AI could be applied, to see: do you need computer vision researchers, do you need machine learning researchers, do you need natural language understanding?

Carla Brodley: My guess is that your number one priority will be machine learning, because machine learning can be applied even to computer vision problems, but I would do an analysis to see where you really need the most collaborators and then go after those areas first, because you’ll be able to write the joint grant proposals and also interface with industry.

Carla Brodley: And I can say, well, we’re not quite done with hiring season, still fighting really hard to get a couple more people on board. It’s quite an interesting hiring year, I have to say.

Carla Brodley: Machine learning people are really, really, really hard to hire, but a lot of the people who are hired into industry are working on how to better serve you ads and are kind of sick of that as a field. So the fact that you have these built-in collaborations will really help you with your recruiting, because, if you were recruiting me, and I were still research-active and not an administrator, I would be extremely excited if on my interview I could meet with the people that I would be collaborating with in these really critical and important domains that are particularly germane in terms of global warming.

Terry Yoo: Right.

Terry Yoo: Thank you very much. We are running out of time in our panel. And we do need some time to transition to the next panel. Thank you to the panelists. Dr. Turner first, of course, Dr. Yasaei Sekeh. And, of course, to our honored guest Carla Brodley.

Carla Brodley: Thank you for having me. Bye bye.

Terry Yoo: Our next panel is being led by Ali Abedi.

Terry Yoo: Ali is going to be running a panel on the applications of AI in business, industry, government, healthcare, and the environment, which is quite a mouthful. As for the other questions that I saw online, I’ll do my best to answer them or pass them off to the panelists. Thank you very much. Ali.

Session 2

Ali Abedi: Thank you, Terry, for running the first panel. I would like to tell attendees: if you liked this first panel, we have three more for you, so please stay around.

Ali Abedi: And we’ll continue with the next one. So the application of AI in business, industry, government, healthcare, and environment is the next panel.

Ali Abedi: I will start out with a few examples of some of the research that we have been doing at UMaine, starting with space applications.

Ali Abedi: In the past six to seven years, through multiple different projects funded by NASA, we flew the first artificial intelligence software to the International Space Station.

Ali Abedi: All hardware was built by University of Maine students; it was all tested by NASA and flight-certified by Boeing. We launched this to space to find tiny leak locations inside the International Space Station by listening to the ultrasonic sound.

Ali Abedi: A postdoc, a Ph.D. student, a master’s student, several publications, and an invention disclosure came out of this. The point I want to make here is that solving problems is one part of the AI work, as you’re doing here, but the most important mission at the University of Maine is to actually educate the workforce and educate the students for the next phase. So look at the next slide.

Ali Abedi: If we look at healthcare applications: the University of Maine, in collaboration with Activas Diagnostics, a startup company spun off from UMaine, and Northern Light Health, one of the largest healthcare systems in Maine, has been working on using artificial intelligence for detecting Alzheimer’s disease and related dementias. Recently, after almost a decade of prototype development, for the past two years we have had a National Institutes of Health contract to test a hundred patients, with mild cognitive impairment, pre-Alzheimer’s, and maybe a little bit deeper into Alzheimer’s disease, using wireless sensors embedded into their mattress pads. So it was a home-care sort of technology that was being tested on a hundred patients, with the system trained by physicians at a Northern Light hospital. We were very excited to see over 95% accuracy in detecting the sleep patterns related to Alzheimer’s disease.

Ali Abedi: The next slide is an application of AI in environmental scenarios. This is a new project funded last year; over the next three to four years, we’re going to build instruments for the forest ecosystem with different kinds of sensors for measuring soil moisture, snow depth, the carbon cycle, and so on. AI will help us in different ways throughout the year: to be able to create wireless sensor networks that can operate throughout the entire year, not just for a few months, because of their efficiency in the wireless channel. We also provide this data to foresters for monitoring the forest ecosystem. This is a National Science Foundation project in collaboration with the School of Forest Resources, the University of New Hampshire, and the University of Vermont.

Ali Abedi: And the next slide basically talks about business and industry applications. We are working with a Maine-based company called LDE Electronics that has one of the largest contracts with quick-service restaurants, and we’re trying to help them build multi-sensor networks that can detect, with much higher accuracy than current systems, the cars and different kinds of trailers coming into the drive-throughs of different restaurants. We’re going to apply the same methodology to automobile detection for self-driving cars, etc. I don’t want to take too much time; I just wanted to give you an overview of things that have been going on at UMaine over the past decade or so in applied AI in the Electrical and Computer Engineering Department. Without further ado, I would like to pass the mic to our first panelist, Dr. Kay Aikin, cofounder and CEO of Introspective Systems, to talk about her research. Kay, take it away.

Kay Aikin: I don’t think my video is on yet. By the way, Ali, thank you for the promotion, but I am one of the few non-PhDs here; I have a Bachelor of Engineering in energy systems.

Kay Aikin: So I’m going to give a really quick summary of research that was mostly funded by the Department of Energy, but also by a foundation in Israel called the BIRD Foundation, which supports joint collaborations between US and Israeli companies. Next slide.

Kay Aikin: So, a lot of the talk has been about most of the world wanting to get to renewable energy. Maine, through an initiative by Governor Mills, now aims to be 100% renewable by 2045, and that’s not just 100% renewable in the …

Kay Aikin: My video is apparently not working. Thank you.

Kay Aikin: …is 100% renewable across the entire energy use. So, that is, including the electrical system, transportation, process heat, and space heating for our homes and businesses. This is actually making the electrical infrastructure, very, very complex. And one of the ways we can solve this problem is using artificial intelligence, the work we’re doing with the Department of Energy is to design a new architecture that can be rolled out in the entire United States.

Kay Aikin: The advantage is that if you tried to do a centralized system, you would end up with so many separate agents, so many individual nodes to control, that the computing power required would be astronomical and the problem would be unsolvable.

Kay Aikin: So we’re actually building an architecture that uses artificial intelligence to split the grid up into many, many small pieces. You can think of one small area as your house; the next area would be your neighborhood, the next would be everything from your substation down, then a part of the state, and then the entire state.

Kay Aikin: And this is actually how ecosystems work: ecosystems are autonomous individual agents that work together to solve a global optimization problem, and that is living. So the idea is biomimetic; it requires distributed intelligence. In this case, the distributed AI is adaptive, just like an ecosystem, and also fractal, meaning it has multiple layers. Next slide.

Kay Aikin: So the biggest problem in this area is how you coordinate the control. How do you avoid runaway conditions propagating across the entire system?

Kay Aikin: Some people might remember the 2003 blackout, which hit New York City and most of the Northeast, where a single error cascaded into a blackout across eight states and parts of Canada.

Kay Aikin: So what we’ve done is bring two areas together, control theory and economics, into one type of algorithm that uses market-based constructs, market-based economics, to manage the flow of energy. This idea uses sub-optimization: breaking the electrical grid up into multiple layers, with pricing gateways that price power in each local area based on the scarcity or abundance of power available. Next slide.

Kay Aikin: A very quick review of the control theory; some people will recognize this. This is the Bellman equation. We use a subset of that approach called adaptive dynamic programming, which was developed in the early 1980s. Our particular special sauce is that we have both online and offline learning, which continue to learn and evolve on the grid as it operates.
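For reference, the Bellman equation Kay mentions can be written out. In its standard discrete-time form (the notation below is assumed, since the slide itself is not reproduced in the transcript):

```latex
V(s) \;=\; \max_{a}\Big[\, r(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V(s') \,\Big]
```

Here V(s) is the value of state s, r(s, a) the immediate reward for taking action a, γ a discount factor, and P(s′ | s, a) the transition probability. Adaptive dynamic programming approximates V with a learned function when the state space is too large to solve exactly.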

Kay Aikin: This class of algorithms is very, very good at solving multi-dimensional problems that until now had been unsolvable. A prime example that a lot of AI researchers have talked about is the traveling salesman problem: given 100 places to stop at, find the quickest route through all of them.

Kay Aikin: This is actually a very, very hard problem to solve, and these classes of algorithms have worked on solving it. This form of ADP allows the system to evolve consistently as time progresses. Next slide.
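To make the scale of the traveling salesman problem concrete, here is a minimal Python sketch. This is not the ADP approach Kay describes; it simply contrasts exact search, which explodes combinatorially, with a cheap greedy heuristic. The point set and heuristic are illustrative only.

```python
import itertools
import math

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(points, start=0):
    """Greedy heuristic: always visit the closest unvisited city next."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def brute_force_tour(points):
    """Exact answer by trying all (n-1)! orders; only feasible for tiny n."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length(points, (0,) + p))
    return (0,) + best
```

For 6 cities the brute-force search checks 120 orders; for the 100 stops Kay mentions it would be 99! orders, which is why exact search is infeasible and approximate, learning-based methods are attractive.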

Kay Aikin: This is actually a real-world example of the research. Some of you might have heard about a microgrid being developed on Isle au Haut.

Kay Aikin: And in that case, we’re using a 300-kilowatt solar array, a one-megawatt-hour battery storage facility, as well as over 20 heat pumps that help balance the grid. What drives the decision making for those devices is a price signal; that price signal prices power in real time, using AI to try to balance the system so there’s enough power at all times.

Kay Aikin: So when power is scarce, consumption will go down (the heat pumps will start turning off) and production will go up (the battery will provide more power); when power is abundant, consumption will go up, basically shifting power use to a different time, and production will go down. And with that, I’ll hand it back for questions.
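The scarcity response Kay describes can be caricatured in a few lines of Python. Every threshold below is invented for illustration; the real Isle au Haut controllers use real-time AI-driven pricing rather than fixed numbers.

```python
def device_response(price, comfort_price=0.12):
    """Toy price-responsive heat pump (hypothetical thresholds).

    Returns the fraction of rated power the device draws (0.0 to 1.0):
    full power when electricity is cheap, backing off as it gets scarce.
    """
    if price <= comfort_price:
        return 1.0  # power abundant: consume now (e.g. pre-heat the house)
    # power scarce: back off linearly, fully off at twice the comfort price
    return max(0.0, 1.0 - (price - comfort_price) / comfort_price)

def battery_response(price, buy_below=0.10, sell_above=0.15):
    """Toy battery: charge when power is abundant, discharge when scarce."""
    if price < buy_below:
        return "charge"
    if price > sell_above:
        return "discharge"
    return "idle"
```

At a low price both devices absorb energy (the heat pump runs, the battery charges); at a high price consumption drops and the battery supplies the grid, which is exactly the shifting of use and production Kay outlines.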

Ali Abedi: Thank you very much, Kay, for presenting these slides. Let’s move on to the next presenter: Dr. Sepideh Ghanavati, an assistant professor in the School of Computing and Information Sciences at UMaine. Sepideh, please.

Sepideh Ghanavati: Hi. We have the Privacy Engineering – Regulatory Compliance Lab at UMaine, where we deal with different aspects of protecting the privacy of individuals in the age of AI.

Sepideh Ghanavati: We deal with making sure that applications don’t collect, use, or process personal information without the user’s consent. Next slide.

Sepideh Ghanavati: One side effect of the advancement of AI is that applications, such as mobile apps, collect massive amounts of information from users and process those data, and in doing so they often violate individuals’ privacy. For example, they might record audio without permission, or collect location data without the user even knowing it. What we’re doing at the Privacy Engineering – Regulatory Compliance Lab (PERC_Lab) is trying to solve this problem of protecting personal information, even while applications use and process that information for their purposes. Next slide.

Sepideh Ghanavati: However, protecting personal information is not easy. One factor is that governments impose requirements on developers: they need to give notice to individuals about what they are collecting, what they are doing with it, and how their applications are using these data. Creating this type of notice takes a lot of effort, because …

Sepideh Ghanavati: First of all, you need some legal background, and legal experts are very expensive. And as applications are constantly updated, the laws are also being updated, so it’s not easy for developers to keep their applications’ behavior matched with what they actually tell the user. In the PERC_Lab we try to resolve these problems by developing frameworks that, instead of privacy notices being written by hand, translate what actually happens in the application directly into natural-language privacy statements. Today, developers often use privacy-policy generators that just create inconsistent and generic text; we try to resolve this problem. Next slide, please.

Sepideh Ghanavati: One of the projects we have is developing a recommender system that goes from the code segments of an application to statements like the ones you see on the slide: very simple statements that users, and the developers, will understand. Next slide.
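The input/output relationship of such a recommender can be sketched very roughly. The real PERC_Lab system works over code segments with learned models; the keyword lookup below is only an illustration of the idea, and the rule table pairing Android API names with statements is hypothetical.

```python
# Hypothetical rule table: Android API call -> plain-language privacy statement.
API_TO_STATEMENT = {
    "getLastKnownLocation": "This app collects your device's location.",
    "MediaRecorder":        "This app may record audio.",
    "getDeviceId":          "This app reads a device identifier.",
}

def suggest_statements(code_segment):
    """Return candidate privacy statements for a snippet of app source code."""
    found = [stmt for api, stmt in API_TO_STATEMENT.items()
             if api in code_segment]
    return found or ["No personal-data APIs detected in this segment."]
```

For example, a segment containing a `getLastKnownLocation` call would yield the location-collection statement, giving the developer draft notice text that actually matches the code’s behavior.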

Sepideh Ghanavati: And as I said, for this we are using AI techniques while also protecting the data. So even with the advancement of AI, we are still protecting users, so that they have a balance between getting the benefit of the techniques AI offers and protecting their individual privacy. Next slide.

Sepideh Ghanavati: So we have several projects. One is related to Android applications; we are also working with the internet of things and blockchain to protect individuals’ privacy in healthcare and smart-home environments. And we are also looking at the regulations, to make sure that these privacy statements are compliant with them. We have four phases for this project, and several of our PhD students and undergraduate students are working on it. Next slide.

Sepideh Ghanavati: The main contributors to the project at the moment are a senior policy research scientist from Google, two PhD students, and several undergraduates, all working to develop applications that protect individuals’ privacy.

Sepideh Ghanavati: Thank you.

Ali Abedi: Thank you very much, Sepideh. Our next panelist is Dr. Tim Lu, Alfond Associate Professor of Business Analytics from the Maine Business School. Tim.

Tim Lu: Thank you, Ali. I would like to add some comments on this important topic from the business perspective, in two different aspects.

Tim Lu: One is the business applications of AI, and the other is the philosophy of using AI in business. So first, on the application side, we can classify, in general, all AI business applications into three categories, each serving a specific objective.

Tim Lu: First is rule-based machine learning to support high-level business decision making, using transparent and accessible algorithms, like regression or tree-based algorithms, which require significant human intervention. Here it is very important to point out that "rule" refers to business rules, not any mathematical rule. Second, we have purely data-driven machine learning to make low-level business processes more efficient, using neural-network or deep-neural-network based algorithms that depend less, or hardly at all, on human intervention.

Tim Lu: And third is automation, to optimize business operations and production by replacing human labor with robotics. I believe the most recent example is automation in meat processing.
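The first category, transparent rule-based decision support, can be illustrated with a toy decision tree. Every threshold and rule below is hypothetical; the point is only that each branch is a business rule a manager can read, audit, and override, unlike the hidden weights of a neural network.

```python
def approve_credit(income, debt_ratio, years_at_job):
    """A transparent, hand-auditable decision tree (hypothetical thresholds).

    Each branch corresponds to a readable business rule, which is the sense
    of 'rule-based' described above: the reason for every decision is visible.
    """
    if debt_ratio > 0.45:
        return "decline: debt ratio above policy limit"
    if income >= 50_000:
        return "approve"
    if years_at_job >= 3:
        return "approve: stable employment offsets lower income"
    return "refer to human underwriter"
```

The final branch shows the "significant human intervention" Tim mentions: cases the rules cannot settle are routed to a person rather than decided automatically.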

Tim Lu: And on the philosophy side, I believe.

Tim Lu: I believe the fundamental reason that AI technologies are playing, and will play, much more enhanced roles in business is that we are in the so-called third AI wave. The key feature of the third AI wave is that it is powered by high-performance computing infrastructure and big data.

Tim Lu: And business is actually all about human behavior, which is probably the most important lesson we have learned from the current pandemic.

Tim Lu: So in the business environment, it is not really about artificial intelligence at all; it is really about the collective intelligence of human and machine.

Tim Lu: For example, humans are better at thinking and perceiving, and machines are better at predicting and recognizing. So there are two key questions for anyone who wants to introduce AI technology into their business. First: what work should the machine do, and what work should a human do?

Tim Lu: The other question is how to optimally integrate the human work and the machine work together. This is a widely debated topic in the business research community.

Tim Lu: Conceptually, to serve the three different types of AI applications in business, AI can play the role of tool, assistant, peer, and manager.

Tim Lu: So essentially, what we really want in business is a bidirectional relationship between human and machine, so that the machine can better support or manage human work, and humans can make machines work better. That is all from me today. Back to Ali. Thank you.

Ali Abedi: Thank you very much, Tim. Our next speaker is Dr. Auroop Ganguly from Northeastern University, in engineering and computer science.

Auroop Ganguly: Thank you very much. I think my video needs to be started from the other side.

Auroop Ganguly: Thank you. Alright, so once again, thank you for this opportunity. I am at Northeastern University, where I lead the Sustainability and Data Sciences Laboratory. I’ll talk a little bit about the research we do there as it relates to this particular panel.

Auroop Ganguly: One of my former PhD students has started a climate risk analytics company, which focuses on climate change adaptation through data analytics and AI, especially in the urban sector. It started as an NSF SBIR project and has now been embedded within a large exchange. So with that, could we move to the next slide, please.

Auroop Ganguly: Thank you very much. So the case I’m trying to make here is that when you think about climate, the science of climate and adaptation, and the implications,

Auroop Ganguly: We are talking about inherently coupled, complex systems. Here is just one example from the country where I was born and grew up, India, where there is a lot of dependence on the monsoon. In 2012 there was a delayed monsoon along with extreme heat waves,

Auroop Ganguly: Which together caused a surge in agricultural water demand. And because of various policies that had been put in place with all good intentions,

Auroop Ganguly: But, the way these things sometimes have unintended consequences, there was an extreme surge in agricultural water demand through the electric pumps that had been put in place.

Auroop Ganguly: And that led to increased stress on the power grid, and what that caused was the 2012 India blackouts, the largest blackout ever in terms of population impacted.

Auroop Ganguly: And that, in turn, because of the way the power system feeds railway signals and systems, impacted one of the major lifelines of India, the railway network. So right there we go from the natural system of climate, weather, and the monsoon, connecting with both policy and human behavioral issues through the surge in agricultural water demand,

Auroop Ganguly: The use of electricity, and then into engineered systems, basically the power grid and railway networks. It just shows how interconnected we are when you think about climate and adaptation. If we go to the next slide, please.

Auroop Ganguly: So the kind of work we have been doing in the lab is, on the climate side, the use of machine learning for weather extremes, and on the engineering and infrastructure side, the impact side, looking at interconnected critical urban lifelines. The one common theme is what some people have called domain-aware machine intelligence. On the climate side, with complex spatiotemporal systems, we have been looking at physics-guided machine learning.

Auroop Ganguly: And on the critical infrastructure side, we have been looking at network science and engineering, informed by novel insights and principles from engineering and policy. So broadly, the theme here is machine intelligence that is domain-aware. If we can go to the next slide, please.
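The physics-guided idea mentioned above can be sketched in miniature: the model is penalized both for missing the data and for violating a known physical constraint. Everything below (the linear model, the origin-crossing constraint, the numbers) is invented for illustration; the actual research couples far richer models with climate physics.

```python
def fit_line(data, physics_weight, steps=5000, lr=0.005):
    """Fit y = a*x + b by gradient descent, with a physics penalty on b.

    The assumed 'physics' here says the response must pass through the
    origin, so the loss adds physics_weight * b**2 to the data misfit:
        L = sum((a*x + b - y)^2) + physics_weight * b^2
    """
    a, b = 0.0, 0.0
    for _ in range(steps):
        grad_a = sum(2 * (a * x + b - y) * x for x, y in data)
        grad_b = sum(2 * (a * x + b - y) for x, y in data)
        grad_b += 2 * physics_weight * b  # gradient of the physics penalty
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b
```

With data lying exactly on y = x + 1, an unconstrained fit recovers an intercept near 1, while a heavy physics weight pulls the intercept toward the physically required 0 at some cost in data misfit; the weight trades off fidelity to observations against fidelity to the governing physics.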

Auroop Ganguly: One piece of work we have done: we have been working with the City of Boston on the last version of the Climate Ready Boston report, and some of my PhD students are now working on the next version. We have also worked with the town of Brookline, Massachusetts on the public health impacts of urban heat waves, looking at risk, exposure, and vulnerability, as well as adaptation and mitigation aspects. Next slide, please.

Auroop Ganguly: And we have also looked at climate risk in urban areas, for example looking at sea level rise, what that means in terms of urban floods, and then connecting that with assets at risk.

Auroop Ganguly: And looking at recovery models with network science, such as after Sandy in New York City: how did the New York City MTA, the mass transit system, recover, and what could have been done in terms of systematic recovery principles? This was an article we published in Climate 2020, the United Nations Association report. If we go to the next slide, please.

Auroop Ganguly: These are all the students and postdocs who have helped in many of these efforts. Thank you very much. Back to you, Ali.

Ali Abedi: Thank you very much, Auroop. At this point we’ll open it up for questions. I would like to ask all the panelists to turn on your videos and unmute yourselves, and I’m going to go through the Q&A to see what questions we have.
Ali Abedi: So the first question comes from Carissa from the University of Maine. She’s asking what sort of resources will be available moving forward for labs that want to apply neural networks.

Ali Abedi: I can answer this question. There is a plan to run a series of seminars in the fall on the different aspects of AI and machine learning applications that we talked about today, so we’ll get much deeper into these.

Ali Abedi: In the future. I would also recommend reaching out to any of our panelists in this webinar to start forming collaborations. So, going down the list of questions, I have a question for Kay.

Ali Abedi: How does decentralizing the grid affect susceptibility to cyber attacks or natural disasters? Kay.

Kay Aikin: That’s a good question. One of the innovations in our work is what we call transactive energy, the idea of using economic systems to manage the grid. In our case, the system is naturally cybersecure, not only because it’s fractal (this is called encapsulation in the computer industry, where you can isolate parts of the grid from other parts) but also because in our particular algorithms we strictly use a downward-facing pricing signal.

Kay Aikin: So a price is sent from an upper node to a lower node and all, does it say right now, the price of power is 12 cents a kilowatt there is no upward communication. So that makes the system very, very secure, it does make it a little more fragile. In other words, if you do have cascading, it can propagate. But we, the idea of fractalizing the grid mitigates that problem. So it’s idea of when reward of rewards versus penalties and you’ve tried to balance that.

Ali Abedi: Thank you, Kay. The next question is from Eileen from the Maine Geospatial Institute. The question is: what are some other methods being used to build Maine’s resource network, to help us all tap into available expertise for shared projects and programs?

Ali Abedi: Would anybody like to take that?

Ali Abedi: I will try to briefly answer this, at least from the University of Maine Orono point of view: we have created the UMaine AI initiative, with a website that lists all the faculty involved in this research and all the different projects, and we’ll definitely be happy to serve as a hub for this. And of course there is a lot of activity also happening at the Maine Business School, as was mentioned. Tim, can you also comment on how your connections with Maine businesses can help people get connected to the network?

Tim Lu: So actually, currently at MBS, our Maine Business School, we have recently added one new concentration in business analytics. We are also in the process of proposing a new Master of Science program in business analytics, and meanwhile we are

Tim Lu: closely working with a couple of local businesses in Maine to support the two programs. For example, I have been working with two companies in the Bangor area to provide opportunities for current and future students, like internships and real business projects provided by these two companies. So we will have a lot more to do in the future.

Ali Abedi: Thank you very much. I just want to mention that, for those questions that do not get answered during this live session, we will provide answers on our AI initiative website, along with the videos and slides of these presentations. I would now like to thank all the panelists on this panel and hand off

Ali Abedi: to the next panel moderator, Dr. Penny Rheingans, director of the School of Computing and Information Sciences at the University of Maine. Penny.

Session 3

Penny Rheingans: Thank you, Ali. This next segment will turn to the needs and resources for education and workforce development to support the growth of AI capabilities.

Penny Rheingans: I’m Penny Rheingans, I’m director of the school of Computing and Information Sciences here at UMaine. I’m pleased to be joined by Jason Judd, director of Educate Maine.

Penny Rheingans: Sharmila Mukhopadhyay, director of the UMaine Frontier Institute for Research and Sensor Technology and Walter Rawle, President of the IEEE Maine Section.

Penny Rheingans: We will each give a short statement and then we’ve left plenty of time to answer your questions at the end. So please use the Q&A box to send us your questions. Jason will lead off with some thoughts about K-12 education.

Jason Judd: Great. Thanks, Penny.

Jason Judd: Great. Hopefully everybody can see my slides now, and we’ll go ahead and get started. I’m excited to be here; I’ve already learned a lot from this conversation so far. I was asked to talk about what’s happening in K-12 education as it relates to artificial intelligence. As Penny mentioned, I’m the executive director of Educate Maine, a business-led education advocacy organization focused on increasing educational attainment and career readiness.

Jason Judd: We facilitate a program called Project>Login, which is really about building Maine’s tech workforce, working with K-12 schools and higher education institutions, as well as employers.

Jason Judd: So we like to work with everybody who’s supporting this work moving forward. What I wanted to do is give you a sense of what’s happening with artificial intelligence right now in Maine and what the K-12 system looks like.

Jason Judd: I wanted to pull together this graphic, which describes the five big ideas in K-12 education as it relates to artificial intelligence, because I think you can naturally think about what some of those building blocks are, in addition to a strong math, science, and computer science foundation,

Jason Judd: Where students are beginning to learn what AI is and teachers are beginning to learn what AI is.

Jason Judd: and learning some of those skills. You can see that some of that skill and knowledge you can certainly learn in computer science classes, but you can also learn it in classes like social sciences, engineering, science, math, and so on.

Jason Judd: I wanted to talk a little bit more about computer science education here in Maine to just remind folks kind of what the climate looks like.

Jason Judd: In Maine, about 40% of our schools have computer science offerings. Certainly that’s a number we’re trying to increase collectively, so that all Maine students have computer science instruction in the K-12 system and can get excited about potential majors related to this area in the higher ed system.

Jason Judd: There are a number of different initiatives happening. One is the CS for Maine coalition, a whole group of organizations working together on policy, as well as on teacher professional development, to get computer science into more schools and to help train more teachers to teach computer science.

Jason Judd: The Department of Education here in Maine has also worked on a computer science education state plan that maps out what the next couple of years need to look like for Maine to continue scaling up computer science activities. And then we have a number of active teacher professional development programs, some through the University, some through nonprofits,

Jason Judd: that are teaching teachers brand new to computer science how to be computer science teachers. Certainly a lot of that curriculum has not only building blocks that connect to artificial intelligence, but specific lessons students can do to explore this particular topic.

Jason Judd: Hour of Code activities and other short-term lessons are also one way for students and educators to learn what artificial intelligence is all about and how it connects to the work they’re doing in K-12 schools. Broadly, we have more work to do in this area to make sure our young people have a good understanding of AI and computer science before they get to the higher education system.

Jason Judd: I also wanted to talk really briefly about something the K-12 system and the higher education system have in common, which is a reliance on internships and experiential learning activities.

Jason Judd: We want to make sure that young people can explore these skills directly with employers, in addition to in the classroom. We run a program called the Focus Maine Intern Experience,

Jason Judd: with more than 600 interns that come to us during the summertime; we are the convening organization for them, and many of the students are in computer-science-related internships in Maine.

Jason Judd: What we hear from employers is that the more experience, knowledge, and interest young people have in artificial intelligence, the more marketable they are for those competitive internships. So among the employer community we work closely with, there’s certainly interest in expanding these experiential learning opportunities, both as students finish high school and as they move ahead into their college experience. And I couldn’t present without really emphasizing partnerships; this particular presentation this afternoon is a good illustration of leveraging a lot of different folks with different expertise from all perspectives, and of making sure we can really all work together on this topic. So there are a couple of things I want to share, one of which is simply that information sharing is really essential.

Jason Judd: Building formal partnerships among K-12, higher ed institutions, and employers; working together on teacher professional development; and using joint assets and expertise to leverage these opportunities and apply for grants together.

Jason Judd: As well as convening, hopefully in person, in the future. The Computer Science Teachers Association will be coming to USM in October for all of New England, so that’s a great opportunity for us to do some work together and run conference sessions that put a spotlight on what’s happening here in Maine, as well as collaborate with our New England peers.

Jason Judd: And with that I’ll turn it back to Penny.

Penny Rheingans: Thank you Jason!

Penny Rheingans: And I’m hoping our hosts will turn my video back on. Thank you. So, building on what Jason has talked about, I want to move on and talk a little bit about how we can follow up on that foundation in higher education.

Penny Rheingans: At the undergraduate level, the cultivation of AI skills and expertise is really part of a broader base that must include courses in program design and development, statistical foundations, electives in AI and supporting technologies, and applications to real problems needing AI solutions.

Penny Rheingans: The UMaine undergraduate program that best embodies that is the Bachelor of Science in computer science. This ABET-accredited program provides a foundation in program design and development, systems, fundamentals, mathematical foundations, and ethics, and it culminates in a two-semester capstone that emphasizes professional practice and the skills students will need to excel in the workforce.

Penny Rheingans: It also offers electives in AI-related areas like AI, machine learning, computer vision, cloud computing, and privacy. Alternatively, students can enhance a major in pretty much anything with a minor in either computer science or statistics.

Penny Rheingans: Graduate programs offer a greater opportunity to specialize and to develop AI skills and expertise. Three UMaine degree programs have substantial AI content.

Penny Rheingans: First are the Masters and PhD programs in computer science, which help develop the background needed to innovate in AI technology and applications.

Penny Rheingans: A cluster of masters, PhD, and certificate programs in the spatial informatics area allows students from a wide range of backgrounds to develop the ability to apply this new technology to analyze spatial data.

Penny Rheingans: And finally, the Maine MBA offers a new concentration in business analytics, which Tim just mentioned,

Penny Rheingans: that trains students in processing and analyzing large-scale business data to extract valuable information, discover meaningful business knowledge, and recommend optimal business decisions.

Penny Rheingans: Additionally, many UMaine graduate programs in other disciplinary areas offer specialized courses or course topics or project opportunities that apply AI tools to challenges in that field. These disciplines range from math and science, to engineering, to the social sciences.

Penny Rheingans: Three new programs under development will offer even more opportunities for students who wish to acquire the background needed to join the AI workforce in the future.

Penny Rheingans: A proposed minor in business information systems will bring business analytics to the undergraduate level.

Penny Rheingans: A proposed Master of Science in business analytics from the Maine Business School, which Tim mentioned, will increase graduate opportunities in this area.

Penny Rheingans: And finally, a cluster of proposed graduate programs in data science and engineering will be accessible to students from a wide variety of backgrounds.

Penny Rheingans: These programs build on foundations in program development, statistics, and systems, and address themes in data collection techniques; data representation and management; data analytics, including AI; data visualization and human-centered computing; and data security, preservation, and reuse. The initial application-area domain specializations will include spatial informatics, bioinformatics and biomedicine, business information, social and behavioral data science, and engineering analytics.

Penny Rheingans: And with that, I’ll hand the floor to Sharmila, who will talk about AI workforce development.

Sharmila Mukhopadhyay: Thank you, Penny. Let me share my slides; hopefully you can see them. I will talk about a few areas related to workforce development. I’m from the Frontier Institute for Research in Sensor Technologies, so I’ll talk a little bit about understanding the broader ecosystem for AI, stay with one application example that is on everybody’s mind right now, pandemic response, and make a few comments about education and workforce.

Sharmila Mukhopadhyay: So, if we think of the broader AI ecosystem, a lot of times there is an application area, which could be anything AI is going to be applied to (the ones I’ve marked in red are those I see as very pertinent to UMaine); then you have sensors or other data-collection agents, which serve the perception function; you collect all that, and then you have the brain component of AI, which is layers of learning and reasoning.

Sharmila Mukhopadhyay: And, as you’ve heard, a lot goes beyond simple algorithms: you have machine learning and deeper learning, and then that creates the logistics of how to respond, maybe through another set of hardware components like robotics or actuators, or through decision making, business decision making, and improved application.

Sharmila Mukhopadhyay: So with that kind of background, like I said, I'll stay with one example, one subset of all the application areas: within healthcare, pandemic response.

Sharmila Mukhopadhyay: So if you look at where AI is helping in pandemic response, again, that's a huge area. It is helping right now and can help in the future; let me just stay with a few. One could be testing; another would be drug and equipment development. Others are contact tracing, supply chain, and so on. And again a few topics are marked in red: these are the areas where I myself and some of my colleagues at the Frontier Institute are involved.

Sharmila Mukhopadhyay: And just to give you an understanding of what the workforce needs to understand, even within a given topic, the work could be at the research level. This is like bench research, what I would do in my lab: trying to compile or design a hardware component for sensing different aspects. AI would help us quicken this process, then go to the clinical level, narrow down what's important, collect the data from that, and provide the feedback loop. This can happen in different specific applications.

Sharmila Mukhopadhyay: So based on these applications, I wanted to spend a little time on what we think the AI workforce would need in order to succeed.

Sharmila Mukhopadhyay: So just to give you an example, these might be two areas we hear a lot about, where AI is currently helping and can be improved for the pandemic response application. And the immediate need we see is that AI professionals need a better understanding of the application.

Sharmila Mukhopadhyay: The application professionals also need to better understand the AI professionals: how they're thinking, and what the algorithms can and cannot do.

Sharmila Mukhopadhyay: So my suggestion, and I think this applies to a lot of areas, is that it's important that we create application-specific modules within the education system, collaboratively developed by faculty from both sides. Going forward, the mantra for effective education would be cross-disciplinary training.

Sharmila Mukhopadhyay: And there are several approaches for that: maybe specific courses, joint projects, student teams. This is just an example I'm throwing out: facial recognition is a known AI component, fever detection is a known sensor technology. Depending on how simple or complex it is, you can combine them to design this kind of disease-surveillance improvement in your future AI.
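The combination Sharmila describes, pairing the AI component (facial recognition) with the sensor component (a fever detector), can be sketched in a few lines. This is a purely hypothetical illustration: the class, function names, confidence values, and thresholds below are all invented for the example, not part of any real surveillance system.

```python
# Hypothetical sketch: fusing a facial-recognition match score with a
# thermal-sensor reading to flag possible fever cases for follow-up.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    person_id: str           # output of a (hypothetical) face-matching model
    match_confidence: float  # 0.0-1.0: how sure the model is of the identity
    skin_temp_c: float       # reading from a thermal sensor

def flag_for_followup(obs: Observation,
                      min_confidence: float = 0.8,
                      fever_threshold_c: float = 38.0) -> bool:
    """Flag only when BOTH the identity match and the sensor reading
    cross their thresholds -- combining the AI component with the
    sensor component, as described above."""
    return (obs.match_confidence >= min_confidence
            and obs.skin_temp_c >= fever_threshold_c)

# A high-confidence match with an elevated reading is flagged; a
# low-confidence match is not, even at the same temperature.
a = Observation("visitor-17", match_confidence=0.93, skin_temp_c=38.4)
b = Observation("visitor-18", match_confidence=0.41, skin_temp_c=38.4)
print(flag_for_followup(a), flag_for_followup(b))  # True False
```

The point of the sketch is the cross-disciplinary one made above: the thresholds belong to the sensor and application experts, the match score to the AI experts, and neither alone makes a sensible decision.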

Sharmila Mukhopadhyay: Without taking much time, I wanted to end with one need I feel we must instill among AI professionals: critical thinking, because they would perhaps be better placed than others to quickly evaluate the credibility of information that is generated, or manipulated, by machines.

Sharmila Mukhopadhyay: So that would be a very important component of the AI workforce. So with that, I would hand it over to Jason.

Sharmila Mukhopadhyay: Thank you.

Walter Rawle: Good afternoon, ladies and gentlemen, thank you to the University of Maine, for inviting me to participate in this presentation.

Walter Rawle: My name is Walter Rawle. I’m the chair of the IEEE main section. Recently I’ve launched a small startup in the space of artificial intelligence and autonomous systems.

Walter Rawle: And during the day, I work at General Dynamics Ordnance and Tactical Systems in Saco, which provides a unique opportunity for me to see this domain from an industry perspective. In the few minutes I have, I'd like to chat a little bit about the tangible economic impact of artificial intelligence and some specific pursuits I've been involved in. Then, from a workforce-pivot perspective, I want to talk about two things: first, the domain of available resources for an existing workforce that is interested in pivoting, and second, some fundamental ideas around what a research and development AI workforce needs to be equipped with.

Walter Rawle: So to talk about the economic opportunities. I’ve recently been involved in some solicitations from the US Navy. There are two that I wanted to highlight here. The first one is around naval depot modernization and sustainment.

Walter Rawle: The US Navy of course has billions of dollars tied up in sustainment activities, most importantly logistics and maintenance: condition-based maintenance, prognostics and health management, and so on. The slide's top left indicates the application of artificial intelligence to that domain, but it is important as well for those who are involved with, possibly, BIW or activities associated with the marine industry in Maine.

Walter Rawle: The photo on the right is a screen capture of a Trident submarine-launched missile.

Walter Rawle: And recently, the US Navy launched a solicitation looking for machine learning-based data analytics for the autonomous navigation of these.

Walter Rawle: To give you an idea of the economic impact, the US Navy's number one pursuit these days, as indicated, is the Columbia-class submarine, the highest-priority development to replace the Ohio-class submarine that serves as the platform for the Trident.

Walter Rawle: The Columbia-class submarine will cost $30 billion per copy, and this really is a sizable sum by anyone's imagination. Each Trident missile is worth $30 million. And, of course, a significant investment in the nation's national security is first and foremost today.

Walter Rawle: So workforce education. I like to differentiate workforce education along two domains. First of all, we have those that are interested in AI applications.

Walter Rawle: And I would like to posit the idea that there are significant resources available for anybody, possibly mature, or even somebody out of school for a few years, who has been involved in some other line of work and who wishes to pivot into the artificial intelligence/machine learning workforce.

Walter Rawle: People like Andrew Ng, Sebastian Thrun, Lex Fridman and others from Stanford and MIT, for example, and those who are involved at Google DeepMind and Google Brain, have all provided excellent resources. TensorFlow, PyTorch, deeplearning.mit.edu and introtodeeplearning.com from MIT are all excellent resources.

Walter Rawle: There are what we might call some renegades at fast.ai and OpenAI that also have excellent training materials available. These are all free of charge. You can actually execute code snippets using Google Colab under the tutorials available from TensorFlow.
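The code snippets Walter mentions typically start with fitting a small model. As a framework-free illustration of what such a tutorial snippet does under the hood, here is gradient descent on a one-variable linear model in plain Python; the data, learning rate, and iteration count are chosen for the example, and a real TensorFlow tutorial would express the same loop through its own APIs.

```python
# Fit y = w*x + b to a few points by gradient descent on mean squared
# error -- the core loop behind the introductory tutorial snippets.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # points on the line y = 2x + 1

w, b = 0.0, 0.0   # parameters, initialized at zero
lr = 0.05         # learning rate (illustrative choice)

for _ in range(2000):
    n = len(data)
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

The free tutorials wrap exactly this idea, computing gradients automatically and scaling it to many parameters, which is why they are an effective on-ramp for someone pivoting into the field.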

Walter Rawle: I just would encourage everybody to think about applying to some of these opportunities.

Walter Rawle: Finally, in the area of workforce education for artificial intelligence…

Penny Rheingans: Thank you, Walter. We're going to have to stop there to take questions; I think we're going to save the rest.

Walter Rawle: That’s fine, thank you very much.

Penny Rheingans: If, if all of the panelists can now turn their videos and their sounds back on, we’ll take some questions from the Q&A. So if you have a question, be sure to type it in.

Penny Rheingans: The first, I’m actually going to read briefly and then pitch to the next panel and this is asking in parallel to technical advances in computer science education, what’s being done to explore some of the ethical aspects, which is the whole next panel so stay tuned for that. The next one I think makes sense for Sharmila and this is what AI approaches have been used in health care diagnoses.

Sharmila Mukhopadhyay: There are quite a few; I think you could see some in the slides. It's a huge area. For instance, even for drug development, say, you're looking at the entire data set you have available for the genomic sequence and then trying to piece out from that what would be the best sequence for a given target. For instance, if you look at SARS, there's an entire data set on the different coronaviruses and how the sequences work out.

Sharmila Mukhopadhyay: And a big area, where AI is helping, is trying to model from that and come up with the best protein.

Sharmila Mukhopadhyay: Even for developing the sensors: which would be the best antibody that can give you the best detection, without confusion, in a specific kind of system. Those are all areas where our AI colleagues are helping us, and I'm sure you can come up with many more. I can see Walter's light on, so if you have more to add, please do. But this is a huge area, actually, and we are in the process of trying to see, for instance, even to get a signal, you can get ten different kinds of signals. How do you sort down to the best signal to use in the field for remote monitoring, things like that?

Penny Rheingans: Thank you Sharmila. This one’s for Jason, what are the biggest challenges for K12 education to be able to prepare students for AI futures?

Jason Judd: I think there are a couple of big challenges. One is resources: making sure that, in such a large state with so many different districts and local control, we have the ability to scale up teacher professional development and develop long-term, high-quality teacher pipelines for computer science instruction. And then certainly I think we're still figuring out where computer science fits into the K12 school day, in terms of time and priority, and when it's appropriate to integrate it into other courses versus standalone offerings. But the good news is we're making some really good progress, working collaboratively with partners, and I think we're going to get there over the next couple of years.

Penny Rheingans: Thank you. Walter, can you say something about what auxiliary skill sets are required to support AI development?

Walter Rawle: So in addition to basic machine learning and the core computer science stuff, data is huge for artificial intelligence.

Walter Rawle: Data science practitioners, I think, are very important; data gathering and associated activities, instrumentation, and hardware engineers are very important in any of these domains. So good electrical engineers are terrific for artificial intelligence.

Penny Rheingans: Not that there’s any bias in that opinion.

Walter Rawle: Not at all. Not at all.

Penny Rheingans: Thank you. Sharmila, let's turn it inside out: can you say something about how AI might influence the field of education?

Sharmila Mukhopadhyay: Oh, actually, I think it is going to change how we teach, right? It's going to change the classrooms, making them a combination of maybe virtual and in-person classrooms.

Sharmila Mukhopadhyay: So there’s a lot of augmented reality which is already, especially in areas, let’s say right now, you know, even to look at these medical imaging guys, you know how they are looking at our building a 3D image of what they got from their MRI and stuff like that.

Sharmila Mukhopadhyay: So that is at a higher level. At a lower level, maybe in K through 12, I can see a lot of augmented reality coming into the classrooms. So the teachers really need to be trained too, I think, and to be constantly learning: first, how to create the optimal mix of in-person, hands-on work versus virtual reality, and how to help the students understand it. And then the other part, which is very important for AI in the classroom, I feel, is that sometimes the high-achieving students can get a lot and move quickly in that direction, while students who are falling behind might be going in the wrong direction. We are learning this ourselves as we offer classes in this mode right now.

Sharmila Mukhopadhyay: It’s a slightly different aptitude needed in the teachers a little bit, to stay in part with different students going different directions. And I think AI, will just keep increasing that.

Penny Rheingans: Thank you. Jason, can you maybe expand on that a little bit and say something more about what K12 teachers can do now to support their students in developing the interest and background the students are going to need to advance AI innovation?

Jason Judd: Yeah, I’d be happy to, I mean to be honest, sessions like this afternoon where teachers are learning about all this wonderful work that’s happening that’s, that’s on the cutting edge in terms of research and actually addressing you know COVID19 and healthcare and those types of things. I think educators simply can, can engage in some of these conversations, K12 educators can engage in some of these conversations, talking to experts talking to researchers. You know, I think that’s really helpful so they can talk to their students about these future career possibilities. The other thing that we’re working on really hard is just bringing K12 educators and higher-ed institution faculty and employers together in the same spaces to have these conversations, to think about local partnerships, where they can work together on projects  that connects to what they might be learning in an undergrad program. So, a whole variety of options, but I think it just starts with a partnership and engagement and asking questions and learning more.

Penny Rheingans: Thank you so much. I wish we had more time, but that’s all we have. I’m so grateful to these people for sharing their expertise with us. I’d like to thank them for taking part in this panel.

Penny Rheingans: And I’d like to pass the floor to Shaleen Jain, who will lead the next panel on issues of ethics and society.

Session 4

Shaleen Jain: Good afternoon, I'm Shaleen Jain and I'm a professor in civil and environmental engineering. One of the big questions in the realm of AI work is how to deploy AI in an ethical, safe and fair manner, and this is important at all levels: private citizens, communities, industry and governments. So for the next half hour, let's focus on these issues at the intersection of technology and society. We have essentially four interlinked facets here: social, ethical, policy, and legal. And so we have with us today:

Shaleen Jain: Jim Isaak, who's an electrical engineer, a past president of the IEEE Computer Society, and a visionary in the field of AI.

Shaleen Jain: Jim writes extensively on the future of a society empowered by AI. We have Harlan Onsrud, who's an engineer and a lawyer and a professor of spatial informatics at UMaine. He's the past chair of the US National Committee on Data for Science and Technology of the National Research Council. And finally, we have Peter Guffin, who's a visiting professor of practice and co-director of the Information Privacy Law program at the UMaine law school, and he heads the privacy and data security practice at Pierce Atwood LLP. He has written extensively on cybersecurity law and information privacy, and has consulted and counseled on Wall Street.
So, Jim

Jim Isaak: Okay, I think we have some slides and probably skip immediately to the second one.

Jim Isaak: They’ll know where we are.

Jim Isaak: Let me expand a little on aspects of this slide, because they relate to some of the questions that have come up. Whoops, can you go back one. Thank you.

Jim Isaak: IEEE-USA is the part of IEEE that deals with federal policy.

Jim Isaak: Federal policy issues are things like privacy, cybersecurity, 5G rollout, and so forth, so the topics that we're talking about here roll into that. I chair a committee on the applications of communications policy and computer policy, and participate in the AI committee as well, which is also part of that activity.

Jim Isaak: Another one of the areas I'm involved with is the IEEE standards work on AI ethics, and all of these are hot links, so when you get the slides you can actually go through in depth and see what kinds of documents are hiding behind this. There's a huge amount of documentation on AI ethics; I highlighted one here, in the next slide, on AI persuasion or nudging policies. And then finally, I'm involved with the IEEE collaborative activity, where we have some communities that maintain an ongoing dialogue on some of these topics. And again, those are open to the public, so if you're interested in some of the topics there, it includes Privacy and Trust as one topic.

Jim Isaak: And another one that's just come up is broadband and COVID-19, the experience of rural broadband through this. So those are the kinds of things we're working with there. Go to the next slide. Okay.

Jim Isaak: Well, anyway, you’ll have a chance to look at the slides as they become available online.

Jim Isaak: One of the challenges you've heard about earlier is the fact that a major aspect of AI application today is in advertising, and huge industries, Apple, Amazon, Google, Facebook, are all driven in part by advertising revenue.

Jim Isaak: So that drive, with AI behind it, creates an opportunity to maximize the effectiveness of AI in bringing forward customers and encouraging them to buy, classic marketing and advertising, but also in influencing them. And one of the challenges we face is that influence is twofold.

Jim Isaak: Certainly there are people out there who want to influence people independent of their actual best interest, so there can be abuse, potentially, in that way. And the second is letting it loose to maximize a specific objective, whether it's the most people buying a certain deodorant or voting for a certain candidate.

Jim Isaak: One of the problems we have with AI deep learning, particularly when it’s built in feedback loops, is it can determine how to best accomplish its objective in ways that the humans involved with it may not at all understand.

Jim Isaak: One of the questions that popped up earlier was on AI bias and variations of that. AI and deep learning are capable of acquiring and applying approaches that the people who created them may not understand, so that leads to my next point.

Jim Isaak: Well, and I should have added: AIs are not able to explain themselves. Sometimes the people running them can, and sometimes you can reverse engineer and find out why the AI came up with those conclusions, but complex systems like this aren't always easily reverse engineered, and it's not clear that AIs will ever be able to actually explain why they did something. I might add, humans aren't really good at that either. So as we move closer to human-type intelligence, we shouldn't be surprised if the machines can't explain their decisions.

Jim Isaak: The next slide basically focuses on where I see the biggest challenge we face.

Jim Isaak: Abraham Lincoln is reputed to have said this isn’t actually his quote but we’ll use it as his anyway “ You can fool some of the people all the time and all the people some of the time, but you can’t fool all the people all the time.”

Jim Isaak: I take that to be three assertions.

Jim Isaak: And if you apply AI to that, the real question is: can you fool a sufficient number of people a sufficient amount of the time? Which leads to this issue of AI-driven persuasion, nudging, propaganda, brainwashing, a variety of names for what can be done there. And there are fairly good research results coming out about what AI can derive about individuals in terms of their personality and various characteristics.

Jim Isaak: My last slide is the next one. It points to a couple of the researchers doing work on this: Michal Kosinski at Stanford is doing work on analytics, privacy, and psychological analysis using AI technology and Facebook data.

Jim Isaak: So that’s one new reference point for that kind of research.

Jim Isaak: There’s also efforts. The school Stanford had,  on using persuasion, Yeah, using technology that actually that program is now closed down. However, I noticed, Oxford, has a similar program on the online persuasion technology. So, this area is being something that studied, we are starting to understand that it can be done and how it can be done and the use of AI and applying to. It is multidisciplinary. This is not something that the technologists understand it’s not something a psychologist understand it requires that multiple disciplinary perspective and leads to some really challenging questions about will their AIs, not maliciously not the Terminator type AI, but simply AIs trying to optimize a particular objective will start to influence us in ways that go beyond our control or our awareness.

Shaleen Jain: Thank you Jim.

Jim Isaak: Thank you.

Shaleen Jain: Thank you, Jim, Harlan

Harlan Onsrud: Slide one

Harlan Onsrud: Okay, well, as you’ve been hearing from most of the previous speakers, the positive societal applications of AI are myriad

Harlan Onsrud: Many of the applications, particularly those focused on advancing science tend not to raise human privacy or human autonomy concerns and indeed promise great benefits for Humanity, yet many current deployments have already given rise to substantial negative societal effects.

Harlan Onsrud: Most of us interact with deployed algorithms on a daily basis, yet most of us are unaware of those interactions and of their effects on our daily lives. Our legal system, combined with technological advancements, has made massive accumulation of data possible, as we've been hearing. By knowing your digital tracks in detail, AI techniques are better able to analyze your past action patterns and predict your future actions, as well as those of millions of other people and your interactions with those other people.

Harlan Onsrud: Algorithmic operations applied to big data are being used to classify and target individuals to sell you goods and services more effectively and influence your decisions and attitudes in social and political contexts, up to and including voting.

Harlan Onsrud: Modern decision guidance systems are herding us in our choices. These conditions have also made massive wealth concentrations possible. Slide two.

Harlan Onsrud: I’m not going to go into details but AI has been a major factor in automated propagation of discrimination. It has greatly facilitated the expansion of income inequality and will continue to be a growing contributor to massive job loss globally.

Harlan Onsrud: And many of the negative effects of AI are a result of a US legal framework which is overly supportive of surveillance capitalism.

Harlan Onsrud: Now, I would argue, of course, that capitalism is good, but not all capitalistic approaches are well suited to support justice in a democracy.

Harlan Onsrud: Jaron Lanier, the father of virtual reality, argues that there is more than one way to build an information society, and we in the US have chosen the self-destructive path.

Harlan Onsrud: So let me just suffice it to say that AI advancements have been complicit in advancing serious widespread and substantial challenges for democracies. Slide.

Harlan Onsrud: So how do we flip this around? How do we alter some of the applications of AI to convert their use into positive forces for improving societal conditions?

Harlan Onsrud: And I think research is going to be a large part of the solution. One fundamental question we are currently pursuing here in Maine is how information societies might enable rich opportunities for all humans to more equitably share the financial benefits of vibrant information economies, while also supporting the agency and autonomy of individual humans. That is how we can bring ethics back into the equation. Slide.

Harlan Onsrud: So a current research focus at the University of Maine is on the development and exploration of ethics-driven, market-based, combined legal-technological models. Technology by itself is not the answer, but neither is merely tweaking the legal system. We need to explore a range of combined legal-technological approaches.

Harlan Onsrud: And then deploy those models through proof-of-concept software development, in order to illustrate just exactly how those solutions could actually work. So again, we need to get back to basics. My last slide: if you're interested in the details of some of our current work on these issues, here is simply a recent paper of potential interest.

Shaleen Jain: Thank you, Harlan.

Peter Guffin: So, Harlan, that was a great introduction to my presentation on the intersection between data protection laws and regulations and the use of AI technologies.

Peter Guffin: I keep getting switched on and off mute. I believe I'm being heard, is that correct?

Ali Abedi: Yes, Peter. Good.

Peter Guffin: So, Harlan, thank you, you introduced this connection between data protection law and the use of AI technologies very well.

Peter Guffin: What I thought I would do in this segment is really focus on that intersection, and shine a light on the fact that, for many of the AI systems we've heard about today, the inputs are personal data, and the outputs as well, from many of these systems, constitute personal data.

Peter Guffin: You know, these outputs represent statistical guesses, not factual information, about individuals, and they're used to classify people, make predictions about people, and make judgments and decisions affecting people.

Peter Guffin: So from a data protection perspective, one might think of an AI system as actually manufacturing personal data about an individual.
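Peter's "manufacturing" framing can be made concrete with a toy sketch: the input is observed behavior, and the output is a new inferred attribute about the person, a statistical guess they never supplied. The function name, keywords, and scoring rule below are invented purely for illustration; real systems infer attributes with learned models, not keyword counts.

```python
# Illustrative sketch of "manufactured" personal data: the input is
# browsing behavior; the output is a newly created attribute about the
# person -- a statistical guess, not a stated fact. All names and the
# scoring rule are invented for this example.

def infer_interest_score(pages_visited: list[str]) -> float:
    """Return a 0-1 'likely interested in fitness' guess from browsing
    history. The person never declared this; the system created it."""
    fitness_keywords = {"gym", "running", "nutrition"}
    if not pages_visited:
        return 0.0
    hits = sum(1 for page in pages_visited
               if any(kw in page for kw in fitness_keywords))
    return hits / len(pages_visited)

history = ["news/politics", "shop/running-shoes", "blog/nutrition-tips"]
score = infer_interest_score(history)
print(score)  # this guess is itself personal data about the user
```

The data protection point is that the output `score` attaches to an identifiable person and can drive classification and targeting decisions, which is why regulators treat such inferences as personal data in their own right.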

Peter Guffin: So recognizing that there are, you know, lots of positives as Harlan just mentioned.

Peter Guffin: There are also, you know, risks with respect to unfair, discriminatory outcomes and the perpetuation of existing socio-economic disparities.

Peter Guffin: So data protection regulators across the globe are actually beginning to address the use of AI technologies attempting to mitigate those risks, recognizing that AI technologies are really you know, becoming increasingly sophisticated and widespread.

Peter Guffin: My reason for wanting to compare different countries and how they regulate AI is that I think paying attention to the similarities and the differences is useful in helping guide organizations and policymakers on how to comply with data protection rules and principles and ethical requirements with respect to the use of AI.

Peter Guffin: So to that end, in this presentation I'd like to look quickly at the FTC, the Federal Trade Commission here in the US; then at the UK Information Commissioner's Office and how they are approaching AI regulation; and last, the Singapore Personal Data Protection Commission.

Peter Guffin: I chose these three because each represents influential data protection regulators. So with respect to the next slide with respect to the Federal Trade Commission.

Peter Guffin: They would be the first to recognize that they've had a long-standing role in regulating algorithms in the financial sector, through their enforcement powers with respect to the Fair Credit Reporting Act.

Peter Guffin: They also act as the enforcer of the Federal Trade Commission Act, which prohibits unfair and deceptive trade practices.

Peter Guffin: They also have played the leading role here in the US with respect to data protection regulation. As recently as April of this year, they issued guidance which set forth some key principles.

Peter Guffin: And they’re listed here on the slide. And they may look familiar to some who are familiar with a fair information privacy practices.

Peter Guffin: But they go a bit further and actually build upon some of the principles in the Fair Credit Reporting Act. The guidance, however, fell short in my view: it was not very specific, helpful, or useful in addressing the unique challenges of the AI context, beyond the FTC Act itself.

Peter Guffin: There’s, there really is no comprehensive federal data protection regulation with respect to AI here in the US. So turning our attention to the UK which has as it’s lead data protection Enforcer. The information commissioner’s office.

Peter Guffin: They’ve actually been leading the way. I think in the world around regulation of AI again from a data protection compliance perspective. Other words, trying to address the risks to fundamental rights and freedoms of individuals, the kinds of interests and rights that Harlan was focusing on in this presentation and in their work. They have come up with an auditing framework which actually is useful for them in terms of auditing organizations that are actually using AI to make sure that the organizations are complying with the Data Protection Act in the UK and the GDPR which is the European data protection law.

Peter Guffin: And what they do, I think, is a really good job of drilling down into some of these more nuanced issues with respect to what constitutes fair, lawful, and transparent processing. They dive deeply into the explainability issue that was first discussed today by Carla.

Peter Guffin: And one of the things I really liked about the UK is that they brought forward this concept of a data protection impact assessment, for when an organization is either developing or deploying an AI technology.

Peter Guffin: And they also…

Shaleen Jain: Let's go to some questions and let all the speakers respond.

Shaleen Jain: So the first one I will pose to you is: can you address the implications of the use of AI technologies in education with regard to the protection of FERPA data?

Peter Guffin: Yes, very, very timely subject, isn’t it.

Shaleen Jain: Yes.

Peter Guffin: Yeah, it comes up in the context of FERPA, and also in connection with compliance with the myriad state laws that actually go further than FERPA, addressing the online providers who are providing these different solutions being used to collect information for useful educational purposes, but also the use of that information for purposes of training their algorithms to better inform teachers and educators about performance.

Peter Guffin: It’s an area of in of increasing interest at a, both the federal, the Federal law here is FERPA, as you mentioned,

Peter Guffin: But there are also various state enactments and other laws, like the California Student Information Privacy Act, which has actually been followed in a lot of other states, including Maine.

Shaleen Jain: Excellent. Great, thank you. So the second question is addressed to Jim.

Shaleen Jain: How are the ethical and social aspects being integrated into K-16 education focused on computing, AI, and so on?

Jim Isaak: I’m not directly in the area of dealing with how to integrate this into the education system, but unfortunately, ethics, has not received a very high visibility in any level of education, university down and so the social impact issues are both a high concern and I will give you a quick example of that in a minute.

Jim Isaak: And I’m not very not formally put into the curriculum at almost any level short of the university where you typically do get ethics as a topic.

Jim Isaak: I’ll give you a quick example. I’m now judging the young inventors program here in New Hampshire, and one of the inventors was demonstrating machine that would go next to your mailbox and be used immediately.

Jim Isaak: Shred junk mail so that your personal information on that junk mail would not end up in your wastepaper basket visible to somebody who might read that.

Jim Isaak: For a fourth grader, I thought that was a wonderful understanding of privacy issues, and I’ll take it.

Jim Isaak: But it goes much deeper, and the problem is that people don’t yet understand the kind of issues that Peter and others raise.

Jim Isaak: Peter’s comment about the manufacturing of personal data is dead on. Until people understand that’s what AI is doing on a broad basis, across the board, in our online systems, and how that impacts their lives, they’re basically running blind. They’re not in a position to apply an informed-consumer, informed-voter understanding of what’s going on.

Shaleen Jain: Great. Harlan, would you like to add to that?

Harlan Onsrud: Well, a couple of things.

Harlan Onsrud: Yes, certainly ethics is something that we’re dealing with in the computer science curriculum; it’s mandated, of course, by ABET accreditation, and we spend a huge amount of time on it. We make sure our computer science students are at a level high enough to really understand the computer science, so we typically teach it at the junior and senior level, really delving into the ethics.

Harlan Onsrud: The ramifications of these technologies that you are going to be helping foist on the world. So certainly at that level we’re doing that. Somebody had raised the issue of ethics before, and of course, when we’re dealing with ethics we’re talking about concepts like justice, non-maleficence, beneficence, but one of the key ones here is autonomy, the autonomy of the individual in these instances. So one of the things we’ve been focused on is really trying to say: all right, how can we enable individuals much greater control over their own information exposure, and what’s the legal and technological model that would allow us to best enable that? And then we can start talking about different legal paradigms, whether they’re in, you know, the GDPR in Europe,

Harlan Onsrud: the UK, Singapore, the US constitutional system. What are some solutions that we can actually pursue right now? And I think we should get back to the kind of basic ethical foundations.

Harlan Onsrud: The philosophers actually provide us with some very substantive direction to move in, for the engineering and software engineering community as well as the business community, on where we need to be heading.
Shaleen Jain: Thank you.

Shaleen Jain: Let me pose this last question to all of you, and let’s start with Peter. Can you touch upon ways in which data privacy laws and standards are being upheld in applications such as autonomous vehicles and so on?

Peter Guffin: A very good question.

Peter Guffin: The reality is there really isn’t any case law developing in this area just yet, and there are a lot of self-regulatory approaches. For example, in the automotive area, there’s been for quite some time now a self-regulatory approach to how to manage all the data, and how to find the right calibration with respect to both the security of that data and the privacy implications of collecting all that data about individuals.

Peter Guffin: Most of the self-regulatory approaches, most of the ways in which regulators are thinking about this globally, really come down to the tenets set forth by the FTC: you know, the transparency, the notice, the explainability, making sure consumers are aware of what actually is being captured. But there are real challenges in the AI area in terms of matching the AI context to the non-AI contexts with which we are familiar, so mapping the FIPPs, for example, the fair information privacy practices, to AI.

Peter Guffin: It’s not a complete solution by any stretch, and I think regulators, consumers, and manufacturers of these...

Peter Guffin: the users of these AI technologies, are realizing that and are building into their compliance programs something that is sensitive to the different governance frameworks that are being developed.

Shaleen Jain: Thank you.

Shaleen Jain: I’ll hand it off to my colleague Ali Abedi and he’ll wrap up.

Ali Abedi: Thank you very much again. I would like to thank all the panelists today for their great presentations, and also my fellow moderators for keeping us on time.

Ali Abedi: Also thanks everyone to all the attendees today and with your very thoughtful questions make sure that the unanswered questions will be answered on our website.

Ali Abedi: And I would like to thank the office of the Vice President for Research for hosting this webinar recording, Dr. Kody Varahramyan for his vision behind this webinar. Also, my colleagues Tammy Crosby and Tilan Copson on the back end that were basically organizing and coordinating all this so this webinar was the tip of the iceberg. We try to barely scratch different aspects of a very broad area of AI and machine learning.  In the fall, we are hoping to start his seminar series which will be dedicated to one topic at a time bit more depth. So look for more announcements and again, thanks very much for joining us. And this will conclude our webinar for today. Have a great day.