Unveiling the AI World of Michael Kanaan

"Numbers, language and storytelling are inseparable. They're very similar topics, a bit of a chicken-and-egg question: which came first? But I think they come together." - Author Michael Kanaan

Michael Kanaan, author of T-Minus AI, combines data with storytelling, old techniques with new ideas, and expected outcomes with unexpected ones. He shares how a truly compelling story is rooted in, and inspired by, real facts and experience. So, especially when telling an innovation or brand story, numbers and language are closely related. They both inform and shape the story, and to some degree they should come together. That way, a story has the power to bring people new information that is both interesting and applicable to their lives. Listen in to hear how AI, ethics and the human experience all inform one another, and how possible futures might unfold: how AI could improve rather than replace human work, connect rather than detract from social interaction, and move politics forward rather than make it easy to disrupt agendas.

Michael Kanaan is the author of T-Minus AI: Humanity's Countdown to Artificial Intelligence and the New Pursuit of Global Power. He was the first Chairperson of Artificial Intelligence for the U.S. Air Force, Headquarters Pentagon. In that role, he authored and guided research, development and implementation strategies for AI technology and machine learning activities across global operations. A Forbes 30 Under 30 honoree, he is currently Director of Operations for the Air Force/MIT Artificial Intelligence Accelerator. His first book, published this summer, addresses the global implications of the fast-changing world of AI, explaining the realities of AI from a human-centered perspective that is easy to understand.

Listen to the Podcast

Transcript

This episode is powered by data storytelling training from Untold Content and Data+Science. Transform your data into powerful visual stories by learning best practices in data visualization and technical storytelling. Whether you're in PowerBI or Tableau, or simply want to communicate your data better, this workshop will inspire you to discover the story behind the data. Learn more at https://untoldcontent.com/datastorytellingtraining/

Katie Trauth Taylor: [00:00:04] Welcome to Untold Stories of Innovation, where we amplify untold stories of insight, impact and innovation, powered by Untold Content. I'm your host, Katie Trauth Taylor.

Katie Trauth Taylor: [00:00:19] Our guest today is Michael Kanaan. He's the author of the new book T-Minus AI. He's also Director of Operations for the U.S. Air Force/MIT Artificial Intelligence Accelerator. Michael, thank you so much for being on the podcast to talk about AI and innovation storytelling.

Michael Kanaan: [00:00:36] Katie, it's my pleasure to be with you.

Katie Trauth Taylor: [00:00:38] I mentioned this when we first got on the call, but I've been reading your book over the last few days. I've been devouring it. I'm about ten pages from the end, so you'll have to tell me how it ends. But it is such a powerful read, T-Minus AI. It just came out, and it really covers everything we need to know about artificial intelligence. Even for those of us in the innovation community who may not work really closely or directly with AI, it leaves a very clear understanding of what it means, what counts as AI and what doesn't, and what we should pay attention to.

Michael Kanaan: [00:01:17] Yes, that was the goal of the book. I mean, we're talking about a technology that touches every interaction of our lives and will only grow in the years ahead. So what I wanted to do in this book is bring it forward in a very human, anecdotal format. If you want to talk about AI, then you have to understand something about evolution, or biology. Numbers: their power, their scale, how they affect us, and of course the basics of how computers work. Language, and how our brains learn. Then we get to: what is AI? And how does it work on us? Because without that context, conversations tend to lack depth and a common foundation, and a clear understanding of what it is. And then, what do we talk about every day? AI in competition, AI in business, AI in international relations. So I wanted to break it up into three parts of the book that were told in that way, so that regardless of who you are, there's something individually meaningful to you.

Katie Trauth Taylor: [00:02:27] Yes, and you accomplished it. I highly recommend that everyone listening to this conversation read the book. One of the things I loved, and maybe this is my bias as a former English professor, is that you go into the human mind, and into human history, to talk about how machines and artificial intelligence necessarily reflect human intelligence, how we think and how we speak. One of my favorite parts is how you compare the origins of mathematics and the origins of storytelling.

Michael Kanaan: [00:02:58] Yes, numbers, language and storytelling are inseparable. They're very similar topics, a bit of a chicken-and-egg question: which came first? But I think they come together. What you're trying to do is reach people through storytelling. Humans learn best through storytelling. That's what I was trying to convey, and I'm glad it came through.

Katie Trauth Taylor: [00:03:24] Yes. And of course, the book also digs into why we should be paying attention to AI on a global scale. Who owns it? What are the risks, the threats and the opportunities? So I want to... before we dive into those, I'd love to hear your personal innovation story. What brought you into the world of AI?

Michael Kanaan: Well, maybe it goes back to 1956, before I was born, because artificial intelligence has been discussed and debated for a long time. In 1956, at Dartmouth, a group of very smart people came together who saw what might be coming with the rise of machines and data, and the way we could memorialize everything around us. They defined artificial intelligence, and they said: "tasks performed by computers that were once considered the domain of humans." When you think about that definition, you can understand that since 1956 we have anthropomorphized AI quite a bit and kept kicking the can down the road, because by that definition, the calculator is, of course, AI.

Katie Trauth Taylor: [00:04:43] Right.

Michael Kanaan: [00:04:43] And then the TI-84 Plus, and we had a better one. Then Excel. Today, Tableau.

Katie Trauth Taylor: [00:04:50] Right.

Michael Kanaan: [00:04:51] So we just kept kicking the problem down the road. For myself, while we were working through this, I came to work at the National Air and Space Intelligence Center, and AI, as I see it, arrived in 2011 with the last [unclear wording]. In 2011, once again, we said AI wasn't real, but because of our improved ability to collect data, cloud computing and everything else, there were advances in compute, architectures, new math and software. Right? Math expressed in software. Machine learning emerged. And it worked. At that point in time, there was the ImageNet competition. The competition was to take a bunch of images, scrape them from the Internet and put them into a database somewhere. Generally speaking, things like cats, right? Because everything on the Internet is cats. And then run the computer against the human. And 2011 was the first time that the computer could outperform the human in these discrete tasks. Voila. Here we are, the machine learning age. Now at the same point in time, in 2011, as I mentioned, I was at the National Air and Space Intelligence Center and I was responsible for a mission called Aces High. And it was a hyperspectral imager. So this is going to get nerdy, but I'll try to make it common-speak.

Katie Trauth Taylor: [00:06:22] I love it. Yes, let's get into it.

Michael Kanaan: [00:06:22] OK, so this was a hyperspectral imager. You and I see in three color bands. The mantis shrimp sees eight or nine. There's a philosophical conversation we could have: "what does the mantis shrimp see that I don't?" Right.

Katie Trauth Taylor: [00:06:35] Right.

Michael Kanaan: [00:06:36] But this hyperspectral imager could see hundreds of color bands. So we ran operations out of the National Air and Space Intelligence Center in the Middle East and Afghanistan. We put it on a drone. And our goal was, based on the imagery you collect and the reflectance of sunlight coming off the ground, because there are so many color bands being collected, if certain color bands are spectrally significant, then you can infer or identify what a material is. Think of things like homemade explosives.

Katie Trauth Taylor: [00:07:18] Yes.

Michael Kanaan: [00:07:18] So our goal in carrying out that mission was simply to save American lives, essentially to say, "wait a minute, don't go down that street, because something is there."

Katie Trauth Taylor: [00:07:30] Yes.

Michael Kanaan: [00:07:30] And we were very successful. The team was an incredible group of individuals who took this brand-new machine; we had to build it ourselves, right? About 30, 40 people, brilliant minds, really successful. But at the same time, this imagery [unclear wording] things were happening. It could take a certain amount of time to be able to alert people to what was happening on the ground. And I said to myself, wait. What about this AI thing? Surely that would let us be faster, or more accurate. And in 2011, as is still the case, most people said, I don't know about that AI thing, it's not real AI, right? So my love of artificial intelligence, and why I moved down this path, truly came from a place of need. To do something for someone else in the name of customer service, in the name of service in general. And from that point in time, it's been a nine-year-long journey to the point we're at now, where I think the world is opening its eyes to its seriousness, its applicability to their everyday life and how it influences them. But when it comes to a story of innovation, that's a story of artificial intelligence. That's just my personal story. But when it comes to innovation, we have an innovator's dilemma that I often think about. The dilemma is: I want things to change. I am unhappy with the current state of being, or the current state of being could be better. But I'm reminded of a quote, "the limits of my language mean the limits of my world." So as innovators, we still have to be able to communicate. I think of the idea of taking the ideas of the new and blending them with the techniques of the old, because you can't just do it alone. And by the way, nobody appreciates just malware in the system, right? Without a goal. So sometimes for innovators, what I think is important is to help yourself, help yourself. Right? To speak that language to, you know, the overused term #OK, boomers. But we have to be able to communicate with them, because otherwise it's just noise in the system. So one common foundation or common denominator to everything was always someone who is a champion, someone alongside of you. And I think it's important for innovators to remember that you're going to stress yourself out unless you're speaking the language of the people that you want to change.
The best way to hack a bureaucracy is to understand a bureaucracy.

Katie Trauth Taylor: [00:10:29] Yes, absolutely. Thank you so much. It's exciting to hear about the work you're doing today and some of the successful missions you've carried out. And then in general, just to hear your perspective on storytelling, the role it plays in helping people gain buy-in and traction, and, as you said, speaking the same language. I really appreciate it. I think you've already shared a couple of examples, but on pages 128 and 129 of your book you have this beautiful table where you outline many different sectors and the ways AI has the potential to make an impact and create good, everything from pharmaceutical R&D to retail inventory and pricing, DNA sequencing and classification, aerospace research and climate analysis. The list goes on. But I'd love to hear more of your favorite innovation stories that get you excited about the future. And then we'll talk about the "dark side" too.

Michael Kanaan: [00:11:28] Oh, no problem. I'm glad we're starting with the softball, right?

Katie Trauth Taylor: [00:11:33] Yes. Yes.

Michael Kanaan: [00:11:34] In a very unique way, AI is innovation. When we're doing an AI project, or bringing it into our organization, our goal is simple: to ask new questions. We don't always think of it that way, because the words "automation" and "artificial intelligence" are so often used interchangeably, and AI has a bad reputation. We think it's going to replace the bottom of our workforce. That's incorrect. Completely incorrect. You won't have a successful AI project that way. In fact, what you should do is elevate it to the top of your workforce, your subject-matter experts, your very best people. What you want to do is start looking at the world through an AI lens, and I'll give you an example. In your life, you want to think about something you do all the time, right, that you are highly accurate on, right. You have to be accurate with that prediction, with that task, with that due-out, with balancing the budget, the book, whatever it is in your personal life; everyone has them, right? Something that ideally moves at high speed, like quick decisions are made. So high accuracy, high speed. And then the other one, high volumes of data, like you're looking at a lot of stuff. Think about case law, right? And precedents. You know, we have all of these attributes to certain jobs in our lives. So what you want to do is find all these examples, or the data that you have, put that all together, and you say, "wow, I have this highly representative data set that is a lot of examples of what I do." OK? And then here's the rule, though. Imagine, if you will, that artificial intelligence isn't real. It's not a thing. It's just this island of I.T. people who are capable of taking on all your tasks. But the rule is you can't give them directions, only the examples we just talked about. If you do those two things and think with this kind of nuanced paradigm shift, and I don't mean to be pedantic in any way, then you've found your AI problems. Because what ends up happening? You take that representative data, you give it to that software, the imaginary A.I., of course, and what does it do? It illuminates insights. The very purpose of machine learning is to discover human patterns. So when I think about what's your favorite story on innovation? Well, by asking a new question, it's simply that. It's exactly that whole process. And I also like talking about A.I. or innovation in some different ways as well.
Often we kind of umbrella everything. Everything is innovation. But it can be a singular noun too. An innovation on the system. A new question that you’re asking. So as it comes to what’s the good of it? I think it can make us be more human. I think we can get out of computer tasks that saturate our lives. Our jobs are too often computer jobs. And by the way, if an AI or automation could replace your job or someone in your workforce, that person shouldn’t be doing that job. Right? That’s not a person-job. So when we talk about the good, it’s all about asking new questions. And I think that’s special, particularly at this moment in time where we need to do that in society. And AI can help us get there.
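Michael's rule of thumb, give the "island of I.T. people" only examples and never directions, is essentially a description of supervised machine learning. As a minimal, hedged sketch (a toy nearest-neighbor learner with made-up data, not anything from the book or from Air Force tooling), this is what "learning from examples rather than rules" looks like:

```python
# Toy illustration of "examples, not directions": a 1-nearest-neighbor
# classifier is never told the rule; it only sees labeled examples and
# generalizes from them. (Hypothetical data for illustration only.)

def predict(examples, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Labeled examples: (features, label). The features here are a made-up
# (speed, accuracy) score for some routine, high-volume task.
examples = [
    ((0.9, 0.95), "expert"),
    ((0.8, 0.90), "expert"),
    ((0.2, 0.40), "novice"),
    ((0.3, 0.35), "novice"),
]

print(predict(examples, (0.85, 0.92)))  # -> expert
print(predict(examples, (0.25, 0.30)))  # -> novice
```

The function is never told what makes someone an "expert"; it generalizes purely from the labeled examples, which is why the quality and representativeness of those examples matter so much.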

Katie Trauth Taylor: [00:15:25] I love it. Thank you so much for sharing. You see this idea in the book that with AI or machine learning, what really drives it is the data we feed in. So one thing I love is that, you know, when it comes to AI, people feel it's this really big concept that's really far away from them, right? It's the idea of robots taking over the world. In the book you push back and say, that's fine. You know, I'm not saying those conversations aren't valid, but there are hard, pressing problems to solve right now in the ways AI can make things happen, and that's where we need to focus. That's where we can leverage it, or prevent it from being misused.

Michael Kanaan: [00:16:14] Exactly. When the doorway is on fire, you don't worry too much about the lightning in the distance.

Katie Trauth Taylor: [00:16:19] Yes, yes.

Michael Kanaan: [00:16:20] Every day I'm in discussions: what about killer robots? What about AI on weapons? What about X, Y, whatever it is. The current state of AI today is creating dystopian societies. It's biased against people. It's affecting hiring practices, and I know we're not on video. You and I are on video right now, but it's just hiring more white guys like me.

Katie Trauth Taylor: [00:16:51] Oh my gosh. That's one of the most powerful examples in the book. There are... oh my gosh. OK, if you read nothing but one chapter, read the one on bias in the machine. I loved that chapter so much. Could you dig into the Twitter Tay, Microsoft's Tay, example and the Amazon hiring one? Or I can recap them, because I read it again this morning. I loved it. Not to put you on the spot, but...

Michael Kanaan: [00:17:14] Of course!

Katie Trauth Taylor: [00:17:15] But, you know, this idea, and this gets a bit into the "dark side" of things, that the same human biases can make their way in through machine learning, perhaps through what we put into the research and the questions we ask. The problems, and the way we frame them, have an impact on the data that gets collected about the outcomes. And then our analysis can also be full of potential bias. One of the great examples you shared is [that] Amazon had a hiring algorithm around resumé reading.

Michael Kanaan: [00:17:43] Yes, AI is like looking in the mirror, right? And I think, to some degree, there's a subconscious aversion or distaste when you see more in the mirror, even if you don't quite understand it. Sure. All it's going to do is reflect, and formulate predictions about, the current state of affairs. So on to the good. This can actually be really good, right? Because what's the difference when Amazon determined they were hiring a lot of older white gentlemen and the algorithm was just as biased as they were? I mean, that's what a lot of our companies look like right now. We're trying to move past that in society. But the difference is that they were held to account. Right? The difference is that people said, "that is unacceptable and we must change." When it came to Microsoft Tay, which was, by the way, for people listening, a Twitter bot that collected a whole bunch of essentially how we interact as humans, and, surprise, it was basically the worst of us.

Katie Trauth Taylor: [00:18:52] Yes.

Michael Kanaan: [00:18:53] Right? It was absolutely the worst of us.

Katie Trauth Taylor: [00:18:55] Yes.

Michael Kanaan: [00:18:56] Very depressing...

Katie Trauth Taylor: [00:18:58] The algorithm was. Yes. The algorithm driving this bot named Tay based her tweets solely on the comments she was getting back on her Twitter, on her tweets. So within hours, Tay was putting out racist and sexist tweets, because those were the comments that came back on her first, her earliest, tweets.

Michael Kanaan: [00:19:22] Her first "hello world." Right?

Katie Trauth Taylor: [00:19:25] Yes.

Michael Kanaan: [00:19:25] So, you know, what we're talking about here, again, back to the point: machine learning applications are simply designed to analyze data and formulate predictions without our direction. But because it's based on data, and data is a reflection of us. The data has always been there, right? It's just that now we memorialize it. It's like the tree falling in the forest. Did it make a sound? Of course it did. The thing is, now that there's something to record it, we have everything on record. So if an algorithm's analysis is based purely on data, that doesn't mean its output is neutral or objectively fair, because bias is reflected in our data. And when it is, every subsequent strategy, analysis or prediction based on that data will, rightly so, also be biased. And if we make decisions based on the answers to those questions, then the underlying biases will, of course, persist in our lives forever. And most of us do believe, at the core of the matter, that we're fully aware and consciously in control of our biases, inclinations and opinions, and that we can intentionally include or exclude them however we see fit. During a never-ending day of decisions, like not walking in front of a car. You're biased against that. That's not a good idea. But we're not. Or the fact that I don't like olives. We're unable to separate ourselves from our biases, or our biases from ourselves, to get philosophical. And we're not even aware of the prejudices we hold, and we're unaware of the many ways they influence our behavior in answering those questions. So regardless of how objective, unbiased or enlightened each of us thinks we are, we have tendencies and biases, aversions and distastes. It defines who we are. So the point is, when you're moving forward on an AI project in your organization, or anywhere, you have to have representation of everyone to ask those questions up front. What could be the tertiary side effects of this? And I think that's what is special, and why AI should be a topic for everyone. The future rock stars in artificial intelligence are ethicists, lawyers, teachers, parents. Right? So many more people need to be involved at the beginning. And it's not just for those I.T. people, because the questions we're trying to solve and the questions we ask are really important. Now, back to the point, though: was Tay, or the Amazon hiring, bad in the long term? I don't think so. I don't think it was necessarily bad. I think it was a good thing. I think it illuminated something that perhaps we thought was true, we found out it was true, and we changed.
They still do not have that algorithm in practice and Tay doesn’t exist anymore.
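The "bias in, bias out" dynamic Michael describes can be seen in a few lines of code. This is a hedged toy sketch with made-up numbers, not Amazon's actual system: a naive model that learns hiring rates per group from historical decisions will simply reproduce whatever skew those decisions contained.

```python
# Toy sketch of "bias in, bias out" (hypothetical data, not Amazon's):
# a naive model that predicts hiring likelihood from the historical
# rate for each group will reproduce the skew in its training data.
from collections import defaultdict

def train(history):
    """Learn the historical hire rate per group from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

# Skewed history: group A was hired 80% of the time, group B only 20%.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8

model = train(history)
print(model["A"])  # 0.8 -- the model "learns" the historical skew
print(model["B"])  # 0.2 -- and would perpetuate it in new predictions
```

Nothing in the code is malicious; the skew comes entirely from the history it was trained on, which is why auditing the training data, and who is represented in it, matters so much.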

Katie Trauth Taylor: [00:22:29] Right.

Michael Kanaan: [00:22:29] In other countries, though... and I'm looking at you, China and Russia and some other places, they won't say no, they won't say that's unfair. By the way, I should rephrase that. I'm looking at you, Chinese Communist Party. You, Russian Federation, right? Not the Chinese people, and not Russian citizens.

Katie Trauth Taylor: [00:22:53] Yes, yes.

Michael Kanaan: [00:22:54] They don't have a voice in those questions. So I think that's going to be... that's going to fall to us, because over there no one is holding anyone to account, at least to the extent that we can communicate. We have to ask better questions so that we don't become more like them. That's... I think that's what's special about this moment.

Katie Trauth Taylor: [00:23:19] Absolutely. You know, the fact that when the hiring algorithm at Amazon, when it was discovered that it was pushing more women's resumes to the side and elevating men's resumes just because the machine learning looked at the history of data of hiring and saw that there were more male resumes, and therefore it interpreted that as being desirable and it perpetuated it – not unlike the way that we did as humans – for decades and decades. And so… But you're right. I think there's something really powerful about that metaphor you said, holding a mirror up to ourselves. And the great end result of that is that Amazon no longer uses that algorithm, or if they were to build one in the future, they would try to accommodate or change based on those biases. And you're bringing us to a… Perhaps the most critical part of your book, which is when companies and institutions are utilizing artificial intelligence in a globalized way, even if those companies are sort of headquartered in different countries, the ways that that innovation is put to use in different cultures and contexts can differ. And the rules and the regulations protecting against security and safety threats are also different. So can you speak to that aspect of A.I. and what we should be paying attention to?

Michael Kanaan: [00:24:47] Well, that's a question, and I think we're certainly jumping ahead a little bit: who is really responsible for making these choices? Is it the developer who made the A.I. and then put it on GitHub, and then someone did something wrong with it? Right? Because its worldview, or the data it was fed, isn't representative of its scope of influence. I think of it like an X and Y axis. Right? So on the Y axis, we have what's labeled worldview, right? Or data. Because data is analogous to the machine's experience; that's how we learn from experience. On the X axis, you would have its scope of application. How many people is that affecting, and in which way? And is its worldview fair? Representative of the number of people it's impacting? So how would this play out in real life? The question is, I certainly don't want an Alexa or a Google Home in my home that was only trained on Southern white gentlemen or people only from Northern California, because its scope of application is broader than that. It's in everyone's home. Now, if it was just, you know, this wouldn't be an AI solution, but like a telephone switch operator or something. Right, then fine. Maybe its worldview doesn't need to be very large to perform that action. So when you start kind of mapping out where things fall on this X and Y axis while we deal with, you know, explain-ability and all these anthropomorphized words of, well, how … Why did AI make that decision and how do we de-bias things and whatever? At least we can start saying, "no. I think that's fair to its scope of influence or scope of impact." Right? And then when it comes to well, then the question is when it's used poorly, whose fault? The person who made it, the company who owns it, you know, on whatever platform or software they have? Is it the government? Sometimes when we talk about A.I., it's like we throw the kitchen sink and the whole kitchen out the window.

[Data Storytelling Ad]

Katie Trauth Taylor: [00:27:07] Right.

Michael Kanaan: [00:27:08] If you kill someone with a hammer, it's not the fault of Ace Hardware or Black & Decker or whoever made the hammer. It's you. What matters is what you do with it. It's what your organization does with it. Right? We have to be accountable for these things. So you can see that in different places, people have different biases. Again, bias isn't necessarily a bad thing. You don't want to eliminate all bias, right? I don't want to de-bias my algorithm of the fact that I don't like olives and start making dishes full of olives. Right? We just have to make sure that it's fair. But what I think is interesting is that we as citizens, and we as a government, do have a role to play. So a thought experiment: let's say you and I, Katie, are at one of these really large Fortune 500 publicly-traded companies, right? And the conversation is, well, we need to be morally and ethically and legally sound with artificial intelligence. That's the right thing to do. And I want to commend all these companies and their ethics boards. It's, I mean, truly, bravo. At the same time, let's imagine we're in that room, though. So you probably have 10 or 15 really, really awesome bright people sitting there saying, I want to do the right thing. Inevitably, you get about three minutes into the conversation, like we have here, and it leads to well, we have to share that data so it can be representative of those people, we have to share that algorithm so that we get rid of this whole "you're in Apple, I'm a droid." Right? So that we can represent all and be ethically sound and do the right thing. And inevitably, in that room is also general counsel from a really reputable institution like Stanford Law. The attorney sits back there, raises his or her hand and says, hold on one second. You have a fiduciary responsibility to your shareholder not to do that. Right? I mean, because that's your intellectual property. So as the conversation moves on, inevitably our own structure in some ways limits us. But who do we have a fiduciary responsibility to as citizens and as a government? Everyone. To everyone out there. So it calls for this reinvigoration of that conversation. Now, let's be clear as well, though, very quickly, you could say, well, yeah, in that case, if we want AI to be fair, there shouldn't be an Apple.
There shouldn’t be a droid. There should just be one. Then all of a sudden, you start looking like China with one platform like WeChat, where people don’t have options. So you can see the slippery slope that can happen very, very, very quickly. What it really means at the end of the day to the question you asked is. You shouldn’t throw the whole kitchen out the window, right? There’s still… There are still frameworks in place that work, even though we said the AI word. Right? It’s OK. But, I think it’s far time that we start, you know, carving out some new square pegs for or square holes for square pegs, not trying to fit it in, and that comes from being informed or at least generally aware of the topic itself and the tertiary effects or secondary effects that could happen from doing one of these projects. And I think that kind of wraps up, “well, what do we need to think about right now?”

Katie Trauth Taylor: [00:31:09] Of course, the questions of how companies handle their data, and who those practices benefit or harm, are incredibly complex, and of course differ from company to company.

Michael Kanaan: [00:31:20] We forget, you know, that when you're not paying for something, you're the product, right? I mean, if you're not paying for it, you're someone else's product.

Katie Trauth Taylor: [00:31:29] We are indeed. I mean, we give up a lot in exchange for personalization. Right. Welcome. Welcome to Amazon, Katie. Here are some recommendations.

Michael Kanaan: [00:31:38] Yes, here are some recommendations. Here's your bunny face on TikTok. It's great. And I appreciate it. Right. These are great capabilities. But think about that conversation, asking someone to trade away the free capabilities in their lives that we've grown used to, because somewhere down the chain you're informing an algorithm that is also blocking Uyghur Muslims in China, right? I mean, if you're on that platform, for example, and, vis-à-vis, the AI is training on you and becoming more robust, you can see how that long chain ends, and it's an intellectually difficult argument. You have to really understand how that happens.

Katie Trauth Taylor: [00:32:29] Could you take that example and go a little deeper for those listening who aren't as familiar with its implications?

Michael Kanaan: [00:32:38] Sure. We talked about how, to a great extent, getting more data means more robust, better-tested algorithms. So when you're sitting on a platform and, say, you put the bunny face on your face. That's computer vision, right? I mean, AI is all around us, like when you open your phone. It recognizes your face through facial recognition so your phone has security and privacy. That's AI. But let's imagine maybe it's a phone from a company you don't quite identify with. Right. They see the world and culture differently than you do. Well, interestingly enough, remember back to that data point: you're training that artificial intelligence. Right? You're making it more robust, and then you have to ask the question, well, tell me what perhaps a company like Baidu, Alibaba, Tencent or whatever it is, is doing with that stuff. And you might find out, after you go down the long chain, well, I actually don't like that. That's compromising someone else in the world. So that gets to the point, too, that everyone should be involved in a conversation, as a consumer to a developer, to a supplier. You're a part of the A.I. chain in some way.

Katie Trauth Taylor: [00:34:07] Yeah. Absolutely.

Michael Kanaan: [00:34:08] That's why we want to have, you know, fundamentally robust, intelligent conversations about this topic.

Katie Trauth Taylor: [00:34:17] Absolutely. Thank you so much for pointing that out. As everyday consumers or citizens, we're on the innovation team too. We have a stake in this game. And as you said, maybe that doesn't mean bearing full responsibility for every misuse or use case after we create something. But to have that conversation and, to our best knowledge, try to anticipate that as innovation leaders, and communicate that up the chain as we try to get buy-in for new projects and ideas, it's quite critical that you spend a little bit of time at least articulating what those other use-cases might be, or at least seeking, you know, the opinions of experts who can help you think through that. And we can't always know; that's what's so challenging. But we do our best to present, you know, to work with the ethics we have in front of us, the decisions we have in front of us.

Michael Kanaan: [00:35:05] You're right. We can't always know. And that's OK. We'll make mistakes. The question is: did you have the right intentions? Did you do your due diligence? Can you stand in front of someone and say, well, there were side effects I didn't realize, but here's how we mitigated them and thought about them, and now we're going to change? That's OK. That's OK. I – dive in, dive into using AI in safe spaces. If you've got a lot of Excel files, you can use machine learning.

Katie Trauth Taylor: [00:35:41] Yes, exactly.

Michael Kanaan: [00:35:42] If you have a lot of financial documents, you can use it. There's something in it for everyone.

Katie Trauth Taylor: [00:35:47] Definitely. I know we've talked a lot about data and numbers, but at Untold we also talk a lot about data stories. Can you share your thoughts on storytelling, and the role storytelling plays in AI and in its success or failure?

Michael Kanaan: [00:36:04] Storytelling is one of the most important things. Through stories we humans can communicate; we can imagine ourselves in someone else's shoes without necessarily picturing it or having lived it. For example, if I describe to you a woman running down the street with a bucket of water. Right? It's splashing everywhere. Maybe you've never done that. Maybe you have. But you're like, oh, I can imagine that, right? Storytelling creates buy-in; it creates experiences that we can understand and that mean something to us. And I also think stories, like reading, are really important. I mean a commitment to reading and storytelling. Right? So I think back to the seminal books in my life. They're works like If You Give a Mouse a Cookie, Where the Wild Things Are, or maybe Goodnight Moon. And I know, I know, I'm referencing some children's books, but don't worry, I'm going somewhere.

Katie Trauth Taylor: [00:37:13] Right now, half of my professional life, since we're a live-at-home, work-from-home household, is all the books you just mentioned, with my one, four and five year olds.

Michael Kanaan: [00:37:23] Those are the best books, right? Those are my favorites.

Katie Trauth Taylor: [00:37:26] Oh yeah.

Michael Kanaan: [00:37:26] But there are also books like [Carl] Sagan’s Cosmos, [Stephen] Hawking’s The Universe in a Nutshell. Or, you know, when I was 10 years old reading Brian Greene’s The Elegant Universe over and over again, maybe most recently [Yuval Noah] Harari’s Sapiens or something, there are favorites like [Leo] Tolstoy, Virginia Woolf, [Aldous] Huxley, and so many more. And these books and storytelling have something in common. People, since the dawn of language learning and eventual[ly] writing have debated and discussed consciousness, theories of physics, biology, social realities, technology and all the rest of the things that constitute the human experience. The average person hears about them. Shoot, I mean, we experience them every day and we know of those words, but not always what those words mean. Essentially, for every topic, they’re brought to light but storytelling brings something to life and then inspires more. And I think that’s a distinction with a difference. So I look back and think of learning, which is really what we’re talking about here, right? Learning through storytelling. I think it’s centered around dialogues. Maybe that’s with someone else or others, but maybe that’s with yourself too, the internal one, that’s really important. And for me, when we talk about A.I., the concepts of consciousness, experience, social order, biology, the whole human story is brought together in the story of A.I.. Now, when we talk about innovation, right. Which is – that’s my personal innovation – we want to tell stories so that they can experience that idea. Take the aspects of it that mean something to them, right? Don’t run down the street with that bucket full of water, walk, right? That’s a lesson, you know, that we can take away, just like when we tell the story of, you know, an innovative group or the creation of the Post-it notes or whatever it may be for you, there are things you can take away and that is the value of storytelling to innovation.

Katie Trauth Taylor: [00:39:37] Thank you. Yes, exactly. I really appreciate those points. It's... absolutely, it's about buy-in, experience, the ability to understand human expression. Just to wrap up our conversation, because I know we could talk all day. This has been wonderful, and I'm so grateful. Again, it's fascinating to me, this idea of the mirror, and how we need to think a little differently when considering the applications we create with AI. It's a bit of a mindset shift. Really, really interesting. It's an interesting position that innovators have to be in, because on one hand you need to think about how we create the right conditions for a learning computer, which is very different from how humans learn, at least in some ways, right? Around data and putting in this data-set. And then we also still have to story-tell to other humans to get buy-in for those efforts and to get feedback and to refine the approach and think about the impact it could have and how it's going to help better people's lives in whatever way that means. And so it's not an easy job to be someone who is innovating with AI right now. But I think that not just being really smart, working with data and building algorithms, but also being able to be a storyteller, is what I'm hearing from you. That's all still critical to the success of AI innovation.

Michael Kanaan: [00:41:02] It's so Renaissance, right? You have to be a Renaissance woman or man now.

Katie Trauth Taylor: [00:41:12] Yes.

Michael Kanaan: [00:41:12] The whole kit and caboodle: writing and storytelling, technical proficiency, or at least enough of it that you can see what's happening and where things are going. I'm thinking of the underlying theme of storytelling. There's a quote Einstein is often credited with: "you don't really understand something unless you can explain it to your grandmother." And I think that's true. But if Einstein had known my grandmother, he might have tweaked his words into a more precise maxim: your grandmother is quite possibly the smartest person you'll ever meet. So if she doesn't understand your explanation, rest assured no one else does either.

Katie Trauth Taylor: [00:41:54] That's right. I love it. Thank you for re[phrasing]...

Michael Kanaan: [00:41:56] That could be the theme for every aspect of our lives, every aspect of business: this ability to connect and to tell stories.

Katie Trauth Taylor: [00:42:08] Wonderful. Thank you. Thank you so much, Michael. I really, really enjoyed your book, and I know the listeners did too. I really enjoyed this conversation. And thank you for flipping that on its head. That was really lovely. I'm glad we could end on that note.

Michael Kanaan: [00:42:23] Thank you, Katie. It was wonderful being with you today.

Katie Trauth Taylor: [00:42:25] Talk to you next time.

Michael Kanaan: [00:42:26] Bye.

Katie Trauth Taylor: [00:42:29] Thanks for listening to this week's episode. Be sure to follow us on social media and add your voice to the conversation. You can find us at Untold Content.

You can listen to more episodes of the Untold Stories of Innovation podcast.

*Interviews are not endorsements of individuals or businesses.
