Algorithms, AI and the alignment problem

Posted by Mike Walsh

Dec 9, 2020 4:08:49 AM

Brian Christian


As we become more reliant on AI, algorithms and automation to manage the complexity of our world - we also face the challenge of systems that are not aligned with our values, or even our intentions.

For Brian Christian, this is not unlike the position of Mickey Mouse in the Disney classic, ‘The Sorcerer’s Apprentice’, who suddenly finds himself overwhelmed by magic beyond his control. In his words, ‘we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete—lest we get, in some clever, horrible way, precisely what we asked for’. 


Brian Christian is one of my favorite writers on AI, with a unique perspective that is very much at the intersection of computer science and philosophy. He is the author of the acclaimed bestsellers ‘The Most Human Human’ and ‘Algorithms To Live By’, which have been translated into nineteen languages. A visiting scholar at the University of California, Berkeley, he lives in San Francisco. His latest book, and the subject of this interview, is ‘The Alignment Problem’. 



In this episode you will learn:


02:30 Brian’s origin story

05:01 Convergence: social sciences & tech

08:20 The alignment problem defined

14:12 AI Safety & the black box problem

18:28 Addressing the issue of bias

27:18 Business vs technical questions

37:06 Reinforcement learning

45:40 Challenging the system

52:45 Future skills




Disinformation wars, robot uprisings and differential privacy

Posted by Mike Walsh

Oct 8, 2020 8:15:47 AM



A US election is far more than just a struggle for the most powerful job in the world; it also provides a glimpse into consumer attitudes and emerging technologies designed to influence opinion. It was during the 2012 US election, for instance, that social media, online data and e-commerce profiling were leveraged for the first time to create a hyper-targeted digital political campaign that ultimately swept the Democrats into power. My guest this week, Harper Reed, was intimately involved in that strategy, having served as CTO of the Obama 2012 campaign, where he was the first to bring the mentality and connective capabilities of the tech industry to the political stage.


I’ve known Harper for a number of years now. We met on the technology speaking circuit, and stayed in touch with a common passion for Leica rangefinders and film photography. Harper has a unique perspective - especially on those shadowy areas where disruptive tech intersects with society, culture and economics.


Prior to his current (stealth) startup, he worked at PayPal, as Head of Commerce and as an Entrepreneur-in-Residence. The technology he developed as co-founder of his business Modest Inc. garnered the attention of PayPal, which acquired Modest only a few years after launch. He also served as a CTO where he pioneered crowdsourcing and grew the company from a 12-person startup to a multi-million-dollar enterprise. Harper is an MIT Media Lab Director's Fellow, sits on the advisory boards for IIT Computer Science and the Royal United Services Institute, and is on the Cornell College Board of Trustees.



In this episode you will learn:


0:00 Living in the dystopia we deserve

05:58 Lessons from the 2012 election

18:33 The 2020 political tech stack

21:58 The rise of the robots

28:23 Differential privacy

38:26 The ethics of unintended consequences


CATEGORY: Technology

How Mastercard uses their innovation lab to co-create with their customers

Posted by Mike Walsh

Oct 1, 2020 1:58:41 AM



Despite all the talk about fintech disruption - banking, finance, and payments remain highly regulated industries with well-established competitors who are not standing still, even in this moment of rapid change and digital reinvention. Mastercard is one such company. They created Mastercard Labs as a way of co-creating and innovating with their customers. The idea was to turn external signals into opportunities and, in doing so, 'de-risk' them for their clients and for the rest of the organization. For example, they are working on quantum computing with IBM, they have established their own proprietary blockchain and several AI projects, and they were behind the Apple Card's tokenization, which allowed it to work without the standard 16 digits.


To learn more about how Mastercard manages their innovation portfolio, I spoke with Ken Moore, who is the Chief Innovation Officer at Mastercard and also Head of Mastercard Labs. Ken is responsible for the company's R&D initiatives globally. Prior to joining Mastercard, he was a Director at Citibank, where he established Citi's first Innovation Lab and subsequently expanded it globally to create a network of collaborative innovation centers across the company.



In this episode, you will learn:


0:00 How the pandemic has changed consumer behavior

3:00 Why Mastercard needed an innovation lab

10:16 The importance of de-risking innovation

13:51 How to use innovation portfolios

21:14 The future customer journey

26:49 Engaging the startup community

29:38 Learning from the Chinese payments ecosystem

33:07 Mastercard’s 7 innovation portfolios



The future of education

Posted by Mike Walsh

Sep 24, 2020 5:13:38 AM

Dr. Shawn K. Smith


This new age of smart machines will still need humans - but arguably, they will need to be ones who can think, create and make decisions in very different ways than the workforce of today. As with the first Industrial Revolution, reinventing education will be a priority going forward, especially if we are to survive the automation-led shakeup to jobs. To find out what it might take to transform schools and learning, I spoke with Dr. Shawn K. Smith, an education futurist and chief executive officer of Modern Teacher. Shawn also sits on the board of The Futures Institute, an organization dedicated to providing global insights on complex local problems.



In this episode, you will learn:

  1. The genius of John Dewey [00.22]
  2. The four phases of society and how they impact learning [04.03]
  3. The emerging digital education ecosystem [07.34]
  4. What it takes to be a 21st century educator [14.07]
  5. How AI is changing learning and instructional models [18.36]
  6. Ethics, transparency and algorithmic risk [24.39]
  7. What skills will tomorrow’s kids need to survive the 21st century? [28.31]


CATEGORY: Education

Using AI in the war against fake news

Posted by Mike Walsh

Aug 27, 2020 8:41:39 AM



Professor Tim Tangherlini calls himself a computational folklorist. Like many fields of research lately, folklore is one where both the tools and objects of study are being profoundly reimagined by AI. I came across Professor Tangherlini's work after reading a research paper that he and his team published on using AI to study the structure and dissemination of conspiracy theories. Their research points the way to strategies that might defeat fake news by explaining how the elements of a conspiracy narrative come together and how they can also quickly fragment if some key parts are removed or challenged.


Professor Tangherlini is currently in the Department of Scandinavian at UC Berkeley, where he also serves as graduate advisor in the Folklore program. He has worked on computational approaches to stories and storytelling over the past three decades. Under the auspices of the NSF's Institute for Pure and Applied Mathematics, he co-directed a program on Culture Analytics, as well as an NEH Institute on Network Analysis for the Humanities. He is the author of several books and dozens of articles. He has done extensive fieldwork on storytelling among paramedics and shamanism in South Korea, as well as archival work on rural 19th-century Denmark. His current work focuses on generative models of common story genres such as legend, rumor, personal experience narratives, and conspiracy theories.



In this episode, you will learn:


  1. How technology is changing the way we think about stories, mythology, and culture (00:06)
  2. What Tim and his team learned from using AI to study the Pizzagate and Bridgegate conspiracies (09:40)
  3. Why crowdsourcing and conspiracy may be two sides of the same coin (21:11)
  4. The dangers of using AI to weaponize misinformation (23:41)
  5. The future of culture analytics, computational folkloristics, and how algorithmic feeds shape our consensual reality (25:31)

CATEGORY: Technology