Stanford HAI 2019 – Introduction to Stanford HAI: Fei-Fei Li

February 28, 2020, by Kody Olson


– It is my great pleasure to
introduce a visionary leader in the field of artificial intelligence. Fei-Fei Li, as you know, is
co-director of Stanford’s Institute for Human-Centered AI and a professor of Computer
Science at Stanford. She was director of the Stanford AI Lab from 2013 to 2018 before she
became HAI’s co-director. She also currently co-directs the Stanford Vision and Learning Lab. Between January 2017 and
the early fall of 2018, she joined Google Cloud
as chief scientist of AI and machine learning during
her sabbatical from Stanford. She focuses her research
on machine learning, deep learning, computer
vision, and cognitive and computational neuroscience. She’s widely published
in top-tier journals and conferences and is one
of the most cited researchers in computer science of her generation. She’s also a leading voice advocating for diversity in STEM and AI,
and she co-founded and chairs the AI education non-profit AI4ALL. She received a Bachelor
of Science in Physics from Princeton University
and a PhD from Caltech. She served on the faculty at Princeton, then at the University of Illinois Urbana-Champaign, before
joining us at Stanford. We’re honored to have her
at the helm of Stanford HAI. Please join me in welcoming Fei-Fei Li. (audience clapping) – Thank you, thank you. Wow, welcome, what an exciting day. I wanna thank Marc for sharing with us such a fantastic vision
for Stanford’s future. I know I speak for John and myself, along with so many of our colleagues, students, and collaborators
here at Stanford, in our excitement for
what we’re building together. Now, I wanna begin this talk by sharing with you an experience I had as an undergraduate research
intern many years ago. It was 3:00 AM. I was huddled in a darkened lab with a group of fellow students. Before us was a cat
under general anesthesia with an array of electrodes carefully inserted into its visual cortex. We connected the electrodes
to a loudspeaker and projected moving lines
before the cat’s eyes. Its neurons fired in response, and through the speaker,
we could hear them. Each crackling sound was a glimpse into the inner workings of a living brain. Together, they played the symphony of a mammalian visual system. The impact of that moment
has remained with me to this day. It was a recreation of an experiment originally performed by neurophysiologists David Hubel and Torsten Wiesel. Their work was a breakthrough
and a true turning point in the field of neuroscience. It laid the foundation
of visual intelligence, earned them a Nobel Prize,
and provided a blueprint for the neural network architecture that is transforming the
world through AI today. And they did it all in 1959. It wasn’t just a feat of science. It was a true feat of imagination. It was the same spark of curiosity that drove thinkers from
Plato to Descartes to Turing to look inward towards
the mysteries of the mind and dare to ask how it works. Where others saw magic, they
saw a scientific frontier ripe for exploration, and so as our understanding of our world pushed ever outward through physics, chemistry, engineering,
and even mathematics, our understanding of ourselves
reached further inward. Today, we find ourselves surrounded by the product of that imagination. It’s an era of connection, information, and technology so advanced
that many are calling it the fourth industrial revolution. It’s a world that offers opportunity
at an incredible scale. But it presents perils as well. Artificial intelligence, a
technology that has grown from an academic niche to a global
phenomenon in less than a decade, has revealed itself to be both. There’s no denying
it: intelligent machines have the potential to do harm. They’re susceptible to algorithmic bias, and they raise concerns over privacy, security, and job displacement. These are the pitfalls of AI. They affect real people, and
they demand urgent action. As technologists, it’s our responsibility to address the failings of our tools. But it’s also our
responsibility to realize the full extent of their potential. The world of 2019 is
extraordinarily complex, perhaps verging on the limits
of our own understanding. And we need smarter technology
to help us make sense of it. These are the opportunities of AI. Consider climate change:
despite its urgency, we struggle to collect data at the scale and granularity necessary
to fully understand it and to coordinate energy-saving
practices at a societal level. Intelligent technology can help. It can integrate data at a global scale, from satellite images
to electric grid sensors and even autonomous drones, across continuous time and space, and all of this can help us optimize the way we allocate resources, from
data centers to our homes. Then there’s healthcare, access to which remains a luxury beyond the reach of so much of the world’s population. But AI is already quietly
transforming diagnostics as computer vision algorithms turn clinical insight into
low-cost technology. Or consider the sheer burden of information itself: documents, records, even scanned paperwork, on a scale far beyond our
ability to process as humans. How much useful knowledge remains undiscovered in this digital clutter? With image recognition and natural language processing methods, document understanding is
making it possible to find out. These are stories of how AI can augment its human counterpart,
how it can provide backup and support for the ingenuity
we naturally possess as the world surpasses
our ability to keep up. The question is: can we have
the good without the bad? To find the answer, we have
to imagine on a larger scale than at any other time in history. And the search will mark the
beginning of a new chapter in AI, one that puts the needs
of humanity at its center and incorporates a global diversity of expertise, ideas, and perspectives. It’s in this spirit that we announce the establishment of Stanford’s Institute for Human-Centered AI, or Stanford HAI. Its mission is to advance AI research, education, policy, and practice
to improve the human condition. It aims to be a global hub for dialogue, bringing every discipline to
bear on the challenges of AI. And it’s an invitation
to all of you to join us in imagining the future, together, by committing to three
fundamental principles. First, in order for AI to develop properly, we need to pair it with an ongoing study of its impact on human society
and be guided accordingly. Next, the ultimate purpose
of AI should be to enhance and augment our humanity, not diminish or replace it. And to achieve the full potential of AI, and for it to better serve our needs, it must evolve to incorporate
more of the versatility, nuance, and depth of human intellect. Let’s start with the technology itself. AI has developed some truly
remarkable capabilities in recent years, but it remains light-years behind the nuance, subtlety, and flexibility of our own intelligence. When an algorithm is driving
a car on a crowded street or contributing to the
outcome of a parole hearing, fluency in human experience is essential. Let’s use the simple photo
to illustrate the difference. The most advanced computer
vision algorithm today would have no problem identifying the objects present,
such as a dog, a couch, a person, a coffee cup, and so on. But what do we see as humans? Not only can we tell the couch is damaged, but we know the dog is responsible. We can also tell, based
on posture alone, that the owner is not very happy. Perhaps best of all, we
can read the look of guilt on the dog’s face, and we
see it all immediately. This is what our minds
do: we recognize context and evoke memories effortlessly, and we use analogies
to understand new ideas in familiar terms. We create associations that
can trigger everything from the insight of a hypothesis
to a piece of poetry. We sense causal
relationships between events, turning a sequence of
moments into a story. And in the presence of other minds, a powerful emotional
dimension emerges as well. Today’s AI is impressive, but the richness of human intelligence remains unmatched. Of course, we recognize that making AI more sensitive to the full
scope of human cognition is no simple task. The solution
would depend on insights from neuroscience, cognitive
science, and psychology, which is why Stanford HAI
is dedicated to fostering collaborations with experts
in many of these fields. Now, I want you to join me in imagining what a technology with this kind of depth and versatility might be capable of. For example, the AI of
today is transforming medical diagnosis as algorithms make the detection of diseases faster, more precise, and more consistent. But this is only the beginning. Put yourself in the shoes of
a clinician in a crowded ER. You’re surrounded by people who need help. But your first responsibility is triaging the most urgent cases. This may be manageable
on an individual basis. But as the crowd gets
bigger, your time and effort are spread thinner and thinner. Further complicating matters,
every patient communicates in their own way and faces circumstances that you
may only see in glimpses. And those needs can change at any time. How might a next-generation AI-powered triaging system help? Imagine it can speed up
preliminary diagnostics by understanding the context
of a limp or slurred speech, cross-referencing its observations with the patient’s medical records. Imagine that it can make educated guesses about the patient’s emotional state based on their face and posture. And imagine it can keep
an artificial eye and ear on every patient while they wait, watching for changes in their
medical and emotional state, and keeping the clinician up-to-date. And imagine it all working in real time for everyone in the ER. The
effect would be transformative. Clinicians would remain
face-to-face with their patients but with less stress and greater focus. Each interaction would begin
with an insightful head start. And in the ER, saving time
is often saving lives. From here, it’s not hard to imagine a similar application in education in the form of an AI-powered
teaching assistant that can monitor class
safety, review homework to gauge student comprehension, flag possible strengths and
weaknesses along the way, and even assist with
the grading process. All the while, the teacher
is able to focus on connecting with her students in a more personal and
less distracted way. And speaking as an
instructor of one of Stanford’s largest
deep learning classes, I can tell you it’s
really exciting to imagine an endless supply of free AI TAs. Even tasks like search and rescue can be made safer and more effective. Imagine low-cost drones being
deployed in a disaster area. Imagine how such technology
would build a real-time, 3D map of the environment, locate people in need, and generate safe, optimal paths to help. Such algorithms could even
share this information with fire departments,
blood banks, hospitals, local governments, and
the media, empowering first responders and
maintaining public awareness. The list just goes on,
but the story is the same. As human-centered AI develops
the basic skills of judgment and perception that served
us well in simpler times, it can scale them to the needs of an increasingly chaotic world in all the ways humans can’t. In turn, our own abilities can be focused on higher-level tasks:
safer, less repetitive, more creative, and
ultimately more meaningful. Rather than replacing us, it can make us better at what we do. This brings us back to the pitfalls of AI. Now, it’s important for
all of us to remember that this is a unique time in history, and we are part of the first generation to see this technology
migrate from the lab to the real world at
such a scale and speed. For the first time, the ethics of AI isn’t an abstraction or
a philosophical exercise. This technology affects real
people living real lives. That makes it our responsibility to understand the full
impact of this technology, to anticipate the ways the
world will change and respond, and to guide it
accordingly. In other words, it’s time to make ethics
a fundamental part of research and development in AI. Let’s start with algorithmic
bias and job loss. These are significant issues, and the solutions will take many forms. Some biases, for instance, are technical and can be addressed
with statistical methods that debias data sets automatically. But human measures are vital as well, such as greater representation
in the tech industry. And when it comes to job loss, while technology can play a big role in reskilling our impacted workforce, policy will also be essential in providing options and support for workers. But the complexity doesn’t stop here. How, for example, will
our laws change as AI plays an increasingly collaborative
role in human tasks? As human specialists rely more and more on machine intelligence, what
will become of our notion of accountability? Who is liable
when something goes wrong? The time has come for engineers
to embrace these questions and for policy makers, legal scholars, and ethicists to help find the answers. Historically, the effects of new technologies tend to reverberate far beyond the capabilities they deliver. The automobile, for
example, was simply intended to help us get around faster. But a century later, the entire world has been reshaped by
highways, parking lots, and gas stations, not
to mention the global impact on natural resources and the climate. What will widespread
intelligent machines do to our communities? How will
they impact our culture? Already, we live in a world
that seems to remind us of technology’s pitfalls on a daily basis. Intelligent technology, for
all its amazing capabilities, will also raise the stakes even higher. These are big and tough questions, and they deserve to be taken seriously. They demand that we dedicate time, resources, and a wide range of expertise
to understand this technology, to anticipate the future it will bring, and to guide its course responsibly. And above all else, it
is a call for humility. These are the three principles
defining human-centered AI: ensuring its design is human-inspired, its role in society is human-friendly, and its impact is human-safe. Now, I’d like to talk about the goals of Stanford HAI as an organization. Naturally, we aim to promote
breakthrough research that advances the state of the art while collaborating
with the brightest minds in medicine, law, economics, humanities, and many other disciplines. Next, to encourage an
inclusive, ongoing dialogue, HAI is working to establish a global forum for thought leadership
that brings together policy makers, academics,
business leaders, journalists, and civil society in general. And many of you are already
sitting in the audience today. We look forward to candid conversations that confront the toughest
questions about our future and keep a global audience informed. HAI will also work to advance the depth and scope of AI
education for its students as well as promote AI
literacy beyond the Stanford campus with outreach programs
that train and inform leaders and practitioners in industry, government, and the media. Finally, it’s not enough
to simply talk about AI. We have an obligation to promote meaningful changes in the real world. This means sharing research and technology with those who can benefit most from it, and delivering studies, data, and expertise that shape policy. It’s no coincidence that all
of this is happening at Stanford. The tradition of imagination
is alive and well here and has been since the
earliest days of AI. In many ways, the first chapter of AI began with the founding of the Stanford AI Lab, or SAIL, in 1963. SAIL is now home to more
than 100 AI researchers, students, and visitors, including myself and my students for the past 10 years. It’s where some of the
foundational milestones of AI history happened,
from self-driving cars to computers that can see, from algorithms that help fight cancer to robots that dive deep under the ocean. Now, we’re setting an example
for AI’s next chapter: its transformation into
a multidisciplinary field that extends far beyond the
borders of computer science. This is perhaps the single
greatest shift in AI’s history. And Stanford is the ideal place for it. Our schools of engineering,
law, medicine, education, business, humanities and
sciences, and earth sciences will each play an important role in making AI a diverse,
human-centered pursuit. But the role of academia is
even more fundamental than that. It’s where the imagination is truly free. Tomorrow’s AI leaders will need to explore every avenue of inquiry, not just those most likely to bear commercial fruit. They’ll need the same scientific freedom that fostered global
cooperation in endeavors like the Large Hadron Collider. Not to make our phones
faster or Wi-Fi cheaper, but to catch the first
glimpse of the Higgs boson. It’s easy to let the term
“artificial intelligence” conjure images of automation
and blinking servers. But to me, AI is a story about humanity: the human values we want
our technology to embody, the people who develop it, and
the communities it affects. And this is what gives
me hope for Stanford HAI more than anything else:
the people it’s made of. Many of these people are my friends, colleagues, and students
here at Stanford already. My colleagues in the AI lab and
the Stanford Woods Institute, for example, are working hard to better understand global poverty. Their model learns how
features of satellite photos correlate with income,
generating detailed maps of wealth inequality for
NGOs and governments. Or my colleagues at the Stanford
Computer Science Department and the School of
Education, who are helping develop assistive tutoring agents that
deliver one-on-one education to students in underserved communities. Or my friends at the
Stanford School of Medicine, training deep neural networks
on EEG data to develop novel rehabilitation
therapies for stroke patients. This is what human-centered AI looks like. And what’s even more exciting is that there are more of them coming. Just last weekend, a
group of newly admitted computer science PhD students
came to visit us on campus. This is an annual tradition here at Stanford, and it’s an incredible
experience every year. These young students come from
every corner of the world, energetic, inquisitive, and
bubbling with creativity. Their talent is the reason we teach, and their enthusiasm is
what compels us to mentor. But something was different this year. From the moment they arrived, I found myself hounded by students who wanted to know more
about human-centered AI. It wasn’t just the
technology that fascinated them. They wanted to use it
to do good in the world. One student planned to apply natural language processing to sociology. Another was interested in
the intersection of robotics, medicine, and the policy
questions it raises. One after another, their visions
challenged and inspired me. They were compassionate,
empathetic, and intensely creative. So the next time you think about AI and what it means for the future, think about people like these. They are a reminder that AI can be more, so much more, than money,
luxury, or convenience. It can be about a vision for
a better world for everyone. This is what Stanford HAI was founded on, empowering bright minds to
imagine the future of AI together: as engineers, as scientists, as experts from innumerable disciplines, and above all, as humans. Thank you. (audience clapping)