FiRe 2019 Speaker Spotlight
Following the very public hack of Sony Pictures, CrowdStrike produced attribution to the government of North Korea within 48 hours. In 2014, it outed Putter Panda, a state-sponsored Chinese group linked to spying on US and European defense, satellite, and aerospace industries. It was instrumental in attributing the 2016 attacks on the Democratic National Committee to Russian intelligence services. Who, you might ask, is the technical mind behind all of this? That would be co-founder and CTO Dmitri Alperovitch, who joins us for a Centerpiece Conversation at this year's FiRe conference.

In addition to leading the CrowdStrike teams that detect and stop breaches across customer environments, Alperovitch designed the architecture for the CrowdStrike Falcon platform, which stops breaches by combining next-generation antivirus, endpoint detection and response, and proactive hunting. And he led the development of the 1-10-60 breakout rule, which is becoming the industry standard for the speed required to stop an intrusion from becoming a breach.

He has been named to Fortune magazine's "40 Under 40," the Politico 50 list, Foreign Policy's "Top 100 Global Thinkers," and MIT Technology Review's "Top 35 Innovators Under 35." He is also a senior fellow at the Belfer Center for Science and International Affairs at Harvard's Kennedy School and serves as president of the CrowdStrike Foundation, a nonprofit established to support the next generation of talent and research in cybersecurity and artificial intelligence through scholarships, grants, and other activities.

We're delighted to be welcoming Dmitri Alperovitch to the stage at Future in Review 2019, Oct 8-11, in La Jolla, California. Learn more about FiRe and register here.
A lot of ink, time, and energy has been devoted to the power of artificial intelligence, and to how society might deal with potential runaway systems before it's too late. Even if we aren't facing a Terminator-style fright-scare, the more sober portrayals of risk are still stark in their warnings. In today's issue, I'll take a closer look at the good and bad news around AI, focusing not on the end of the world as we know it by AI's invisible hand, but on the obvious failures that limit today's implementations to the few things AI does very well, and the many things it cannot do at all.