AI and Trust
Talk Abstract: Trusting a friend and trusting a service are fundamentally different. The former is personal and intimate, while the latter is impersonal and can scale to all of human society. The companies behind the current generative AI systems are poised to exploit that difference. The intimate, conversational nature of these systems will lead us to think of them as friends when they are actually services, and as trusted confidants when they will actually be working against us. Moreover, any serious AI application requires us to be sure that the models are secure. The second problem is a matter of technology; the first is a matter of policy. Both will require government regulation of the industry, which is how we create social trust in our society.
Bio: Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the New York Times best-selling author of 14 books -- including A Hacker's Mind -- as well as hundreds of articles, essays, and academic papers. His influential newsletter Crypto-Gram and blog Schneier on Security are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University, a Lecturer in Public Policy at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation and AccessNow, and an advisory board member of EPIC and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

