
Janelle Shane (2019): The danger of AI is weirder than you think

So, artificial intelligence is known for disrupting all kinds of industries.
What about ice cream?
What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence?
So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question.
They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate.
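The talk doesn't say which algorithm the class used (Shane's experiments typically involve neural text generators), but a character-level Markov chain is a minimal stand-in that shows the same behavior: the model learns only which letters tend to follow which, and nothing about taste. All names below are hypothetical.

```python
import random

def train_char_model(names, order=2):
    """Map each `order`-length character context to the letters seen after it."""
    model = {}
    for name in names:
        padded = "^" * order + name + "$"   # ^ = start padding, $ = end marker
        for i in range(len(name) + 1):
            ctx = padded[i:i + order]
            model.setdefault(ctx, []).append(padded[i + order])
    return model

def generate(model, order=2, rng=random):
    """Walk the model from the start context until the end marker appears."""
    ctx = "^" * order
    out = []
    while True:
        ch = rng.choice(model[ctx])
        if ch == "$":
            return "".join(out)
        out.append(ch)
        ctx = ctx[1:] + ch

# Hypothetical stand-in for the students' 1,600-flavor dataset.
flavors = ["Vanilla", "Chocolate", "Strawberry", "Peanut Butter",
           "Pumpkin Spice", "Rocky Road", "Mint Chip"]
model = train_char_model(flavors)
random.seed(0)
print([generate(model) for _ in range(3)])
```

With a corpus this small, the output is mostly recombined fragments of the inputs, which is exactly why the generated "flavors" can read as plausible letter sequences while being nonsense as food.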
And here are some of the flavors that the AI came up with.
[Pumpkin Trash Break]
(Laughter)
[Peanut Butter Slime]
[Strawberry Cream Disease]
(Laughter)
These flavors are not delicious, as we might have hoped they would be.
So the question is: What happened?
What went wrong?
Is the AI trying to kill us?
Or is it trying to do what we asked, and there was a problem?
In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much.
In real life, though, the AI that we actually have is not nearly smart enough for that.
It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less.
Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains.
So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things.
It doesn't know what a human actually is.
So will today's AI do what we ask it to do?
It will if it can, but it might not do what we actually want.
So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B.
Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs and then how to use those legs to walk to Point B.
But when you're using AI to solve the problem, it goes differently.
You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself via trial and error how to reach that goal.
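That goal-plus-trial-and-error loop can be sketched in a few lines. This is a hypothetical toy, not the actual experiment: the "simulation" is just a scoring function, and random-mutation hill climbing plays the role of the AI.

```python
import random

def fitness(design):
    """Toy stand-in for a physics simulation: score a design by how far it
    ends up from Point A. A tall tower that tips over reaches farther than
    short legs can shuffle, so the search exploits falling, not walking."""
    height, leg_length = design
    distance_by_walking = leg_length * 0.5   # short shuffling steps
    distance_by_falling = height             # tip over, land at the tower's reach
    return max(distance_by_walking, distance_by_falling)

def hill_climb(steps=2000, rng=random.Random(0)):
    """Random-mutation hill climbing: keep any mutation that scores better."""
    best = [1.0, 1.0]                        # [height, leg_length]
    for _ in range(steps):
        candidate = [max(0.0, x + rng.gauss(0, 0.1)) for x in best]
        candidate = [min(x, 10.0) for x in candidate]   # size limits
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

best = hill_climb()
print(best, fitness(best))
```

Nothing in the objective says "walk," so the winning design is mostly height: the optimizer satisfies the goal exactly as stated, not as intended.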
And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B.
And technically, this solves the problem.
Technically, it got to Point B.
The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do.
So then the trick of working with AI becomes:
How do we set up the problem so that it actually does what we want?
So this little robot here is being controlled by an AI.
The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles.
But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...
(Laughter)
And technically, it got to the end of that obstacle course.
So you see how hard it is to get AI to do something as simple as just walk.
So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk.
And it turns out, that doesn't always work, either.
This AI's job was to move fast.
They didn't tell it that it had to run facing forward or that it couldn't use its arms.
So this is what you get when you train AI to move fast, you get things like somersaulting and silly walks.
It's really common.
So is twitching along the floor in a heap.
(Laughter)
So in my opinion, you know what should have been a whole lot weirder is the "Terminator" robots.
Hacking "The Matrix" is another thing that AI will do if you give it a chance.
So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy.
Or it will figure out how to move faster by glitching repeatedly into the floor.
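A speculative toy version of the floor glitch (none of this is from an actual experiment): a one-dimensional physics step with a classic collision bug that injects velocity whenever the body penetrates the floor, so slamming into the floor becomes a source of free energy.

```python
def step(y, vy, dt=0.1):
    """One frame of a toy physics sim with a deliberate collision bug:
    when the body penetrates the floor (y < 0), the "correction" adds
    velocity proportional to penetration depth, injecting free energy
    instead of just clamping the position."""
    y += vy * dt
    vy -= 9.8 * dt               # gravity
    if y < 0:
        vy += -y * 50            # bug: deeper penetration -> bigger upward kick
        y = 0.0
    return y, vy

# Drop the body from 1 m and watch the speed it harvests from the bug.
y, vy = 1.0, 0.0
peak_speed = 0.0
for _ in range(200):
    y, vy = step(y, vy)
    peak_speed = max(peak_speed, abs(vy))
print(peak_speed)
```

Each bounce amplifies the velocity, so an optimizer rewarded for speed would learn to dive into the floor on purpose, exactly the kind of exploit the talk describes.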
When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature.
And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given the list like the ones here on the left.
And here's what the AI actually came up with.
[Sindis Poop, Turdly, Suffer, Gray Pubic]
(Laughter)
So technically, it did what I asked it to.
I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original.
And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors.
So its entire world is the data that I gave it.
Like with the ice cream flavors, it doesn't know about anything else.
So it is through the data that we often accidentally tell AI to do the wrong thing.
This is a fish called a tench.
And there was a group of researchers who trained an AI to identify this tench in pictures.
But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted.
Yes, those are human fingers.
Why would it be looking for human fingers if it's trying to identify a fish?
Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.
(Laughter)
And it didn't know that the fingers aren't part of the fish.
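A tiny, hypothetical reconstruction of how that happens: represent each training photo as a bag of visible features, and score features by how strongly they separate tench photos from the rest. Because the trophy shots always include hands, "fingers" ends up as the strongest tench feature.

```python
from collections import Counter

# Toy training set: each "photo" is the set of visual features the model sees.
# Tench photos are trophy shots, so fingers co-occur with the tench label.
train = [
    ({"scales", "fins", "fingers"}, "tench"),
    ({"scales", "fingers", "grass"}, "tench"),
    ({"fins", "fingers", "sky"}, "tench"),
    ({"grass", "sky"}, "no_fish"),
    ({"sky", "water"}, "no_fish"),
    ({"grass", "water"}, "no_fish"),
]

def feature_scores(train, label="tench"):
    """Score each feature by how much more often it appears in `label`
    photos than in the rest - a crude stand-in for what a classifier learns."""
    pos, neg = Counter(), Counter()
    for features, y in train:
        for f in features:
            (pos if y == label else neg)[f] += 1
    return {f: pos[f] - neg[f] for f in set(pos) | set(neg)}

scores = feature_scores(train)
print(max(scores, key=scores.get))   # prints "fingers"
```

The correlation is real in the data; the model simply has no way to know it is the wrong one.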
So you see why it is so hard to design an AI that actually can understand what it's looking at.
And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused.
There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets.
And what happened was, a truck drove out in front of the car and the car failed to brake.
Now, the AI definitely was trained to recognize trucks in pictures.
But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind.
Trucks on the side is not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign and therefore, safe to drive underneath.
Here's an AI misstep from a different field.
Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women.
What happened is they had trained it on example résumés of people who they had hired in the past.
And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their résumé, as in, "women's soccer team" or "Society of Women Engineers."
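A minimal, entirely hypothetical sketch of the mechanism: score résumé keywords by how often they appear on hired versus rejected résumés in biased historical data. The model has no concept of gender; it just latches onto whatever word best predicts the old labels.

```python
from collections import Counter

def keyword_weights(history):
    """Weight = (appearances on hired résumés) - (appearances on rejected ones).
    A crude linear model: it learns whatever proxies the labels contain."""
    w = Counter()
    for words, hired in history:
        for word in words:
            w[word] += 1 if hired else -1
    return w

# Toy biased hiring history: (résumé keywords, was the person hired?).
history = [
    ({"python", "soccer"}, True),
    ({"java", "chess"}, True),
    ({"python", "chess"}, True),
    ({"java", "womens", "soccer"}, False),
    ({"python", "womens", "chess"}, False),
]
w = keyword_weights(history)
print(w["womens"])   # the most negative weight in the model
```

Because "womens" only ever appears on rejected résumés in this data, it gets the most negative weight, and the model reproduces the bias without ever being told about it.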
The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do.
And technically, it did what they asked it to do.
They just accidentally asked it to do the wrong thing.
And this happens all the time with AI.
AI can be really destructive and not know it.
So the AIs that recommend new content in Facebook, in YouTube, they're optimized to increase the number of clicks and views.
And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry.
The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.
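The objective mismatch is easy to see in a sketch. Everything here is hypothetical (titles, click-through rates, the "harmful" flag): the ranker maximizes predicted clicks, and since harm is not part of its objective, it cannot act on it.

```python
# Toy catalog: predicted click-through rate vs. a harm flag the ranker never sees.
catalog = [
    {"title": "local news recap",       "ctr": 0.04, "harmful": False},
    {"title": "cooking tutorial",       "ctr": 0.06, "harmful": False},
    {"title": "SHOCKING conspiracy!!!", "ctr": 0.19, "harmful": True},
    {"title": "outrage bait",           "ctr": 0.15, "harmful": True},
]

def recommend(catalog, k=2):
    """Pick the k items with the highest predicted clicks. The 'harmful'
    field never enters the objective, so it can never affect the choice."""
    return sorted(catalog, key=lambda item: item["ctr"], reverse=True)[:k]

picks = recommend(catalog)
print([p["title"] for p in picks])   # the two harmful items win on clicks alone
```

The fix isn't to make the ranker smarter at maximizing clicks; it's to change what it is asked to maximize.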
So, when we're working with AI, it's up to us to avoid problems.
And avoiding things going wrong, that may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI.
We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do.
So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction.
We have to be prepared to work with an AI that's the one that we actually have in the present day.
And present-day AI is plenty weird enough.
Thank you.
(Applause)