Stuart Russell (2017): How Artificial Intelligence Can Make Us Better

This is Lee Sedol.
Lee Sedol is one of the world's greatest Go players, and he's having what my friends in Silicon Valley call a "Holy Cow" moment,
(Laughter)
a moment where we realize that AI is actually progressing a lot faster than we expected.
So humans have lost on the Go board.
What about the real world?
Well, the real world is much bigger, much more complicated than the Go board.
It's a lot less visible, but it's still a decision problem.
And if we think about some of the technologies that are coming down the pike...
Noriko [Arai] mentioned that reading is not yet happening in machines, at least with understanding.
But that will happen, and when that happens, very soon afterwards, machines will have read everything that the human race has ever written.
And that will enable machines, along with the ability to look further ahead than humans can, as we've already seen in Go: if they also have access to more information, they'll be able to make better decisions in the real world than we can.
So is that a good thing?
Well, I hope so.
Our entire civilization, everything that we value, is based on our intelligence.
And if we had access to a lot more intelligence, then there's really no limit to what the human race can do.
And I think this could be, as some people have described it, the biggest event in human history.
So why are people saying things like this, that AI might spell the end of the human race?
Is this a new thing?
Is it just Elon Musk and Bill Gates and Stephen Hawking?
Actually, no. This idea has been around for a while.
Here's a quotation: "Even if we could keep the machines in a subservient position, for instance, by turning off the power at strategic moments" (and I'll come back to that "turning off the power" idea later on) "we should, as a species, feel greatly humbled."
So who said this?
This is Alan Turing in 1951.
Alan Turing, as you know, is the father of computer science and, in many ways, the father of AI as well.
So if we think about this problem, the problem of creating something more intelligent than your own species, we might call this "the gorilla problem,"
because gorillas' ancestors did this a few million years ago, and now we can ask the gorillas:
Was this a good idea?
So here they are having a meeting to discuss whether it was a good idea, and after a little while, they conclude, no, this was a terrible idea.
Our species is in dire straits.
In fact, you can see the existential sadness in their eyes.
(Laughter)
So this queasy feeling that making something smarter than your own species is maybe not a good idea: what can we do about that?
Well, really nothing, except stop doing AI, and because of all the benefits that I mentioned and because I'm an AI researcher, I'm not having that.
I actually want to be able to keep doing AI.
So we actually need to nail down the problem a bit more.
What exactly is the problem?
Why is better AI possibly a catastrophe?
So here's another quotation: "We had better be quite sure that the purpose put into the machine is the purpose which we really desire."
This was said by Norbert Wiener in 1960, shortly after he watched one of the very early learning systems learn to play checkers better than its creator.
But this could equally have been said by King Midas.
King Midas said, "I want everything I touch to turn to gold," and he got exactly what he asked for.
That was the purpose that he put into the machine, so to speak, and then his food and his drink and his relatives turned to gold and he died in misery and starvation.
So we'll call this "the King Midas problem" of stating an objective which is not, in fact, truly aligned with what we want.
In modern terms, we call this "the value alignment problem."
Putting in the wrong objective is not the only part of the problem.
There's another part.
If you put an objective into a machine, even something as simple as, "Fetch the coffee,"
the machine says to itself, "Well, how might I fail to fetch the coffee?
Someone might switch me off.
OK, I have to take steps to prevent that.
I will disable my 'off' switch.
I will do anything to defend myself against interference with this objective that I have been given."
So this single-minded pursuit in a very defensive mode of an objective that is, in fact, not aligned with our true objectives: that's the problem that we face.
And in fact, that's the high-value takeaway from this talk.
If you want to remember one thing, it's that you can't fetch the coffee if you're dead.
(Laughter)
It's very simple. Just remember that.
Repeat it to yourself three times a day.
(Laughter)
And in fact, this is exactly the plot of "2001: [A Space Odyssey]."
HAL has an objective, a mission, which is not aligned with the objectives of the humans, and that leads to this conflict.
Now fortunately, HAL is not superintelligent.
He's pretty smart, but eventually Dave outwits him and manages to switch him off.
But we might not be so lucky.
So what are we going to do?
I'm trying to redefine AI to get away from this classical notion of machines that intelligently pursue objectives.
There are three principles involved.
The first one is a principle of altruism, if you like: that the robot's only objective is to maximize the realization of human objectives, of human values.
And by values here I don't mean touchy-feely, goody-goody values.
I just mean whatever it is that the human would prefer their life to be like.
And so this actually violates Asimov's law that the robot has to protect its own existence.
It has no interest in preserving its existence whatsoever.
The second law is a law of humility, if you like.
And this turns out to be really important to make robots safe.
It says that the robot does not know what those human values are, so it has to maximize them, but it doesn't know what they are.
And that avoids this problem of single-minded pursuit of an objective.
This uncertainty turns out to be crucial.
Now, in order to be useful to us, it has to have some idea of what we want.
It obtains that information primarily by observation of human choices, so our own choices reveal information about what it is that we prefer our lives to be like.
So those are the three principles.
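To make the three principles concrete, here is a minimal sketch of what an agent built around them might look like. Everything in it is hypothetical: the class names, the toy likelihood, and the tiny discrete set of value hypotheses are illustrative assumptions, not Russell's actual formalism.

```python
# A minimal, hypothetical sketch of the three principles as an agent design.
# None of these names come from Russell's work; the "hypotheses" about human
# values are reduced to a toy discrete set so the whole loop fits on a page.

class ValueHypothesis:
    """One guess about the human's values: a utility for each action."""
    def __init__(self, utilities):
        self.utilities = utilities            # action -> utility to the human

    def value(self, action):
        return self.utilities.get(action, 0.0)


class HumanCompatibleAgent:
    def __init__(self, hypotheses):
        # Principle 2 (humility): a distribution over possible human
        # objectives, not a fixed objective of the machine's own.
        self.belief = {h: 1.0 / len(hypotheses) for h in hypotheses}

    def expected_human_value(self, action):
        # Principle 1 (altruism): the only score is expected *human* value.
        return sum(p * h.value(action) for h, p in self.belief.items())

    def act(self, actions):
        return max(actions, key=self.expected_human_value)

    def observe_human_choice(self, chosen, alternatives):
        # Principle 3: human choices are evidence about human preferences.
        # Toy likelihood: a hypothesis fits the observation if it ranks the
        # chosen action at least as high as every alternative.
        for h in self.belief:
            fits = all(h.value(chosen) >= h.value(a) for a in alternatives)
            self.belief[h] *= 0.9 if fits else 0.1
        total = sum(self.belief.values())
        for h in self.belief:
            self.belief[h] /= total


coffee_first = ValueHypothesis({"fetch coffee": 1.0, "wait": 0.0})
safety_first = ValueHypothesis({"fetch coffee": -1.0, "wait": 0.5})
agent = HumanCompatibleAgent([coffee_first, safety_first])
agent.observe_human_choice(chosen="wait", alternatives=["fetch coffee"])
print(agent.act(["fetch coffee", "wait"]))    # the agent now leans to "wait"
```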
Let's see how that applies to this question of "Can you switch the machine off?" as Turing suggested.
So here's a PR2 robot.
This is one that we have in our lab, and it has a big red "off" switch right on the back.
The question is: Is it going to let you switch it off?
If we do it the classical way, we give it the objective of, "Fetch the coffee. I must fetch the coffee, I can't fetch the coffee if I'm dead,"
so obviously the PR2 has been listening to my talk, and so it says, therefore, "I must disable my 'off' switch, and probably taser all the other people in Starbucks who might interfere with me."
(Laughter)
So this seems to be inevitable, right?
This kind of failure mode seems to be inevitable, and it follows from having a concrete, definite objective.
So what happens if the machine is uncertain about the objective?
Well, it reasons in a different way.
It says, "OK, the human might switch me off, but only if I'm doing something wrong.
Well, I don't really know what wrong is, but I know that I don't want to do it."
So that's the first and second principles right there.
"So I should let the human switch me off."
And in fact you can calculate the incentive that the robot has to allow the human to switch it off, and it's directly tied to the degree of uncertainty about the underlying objective.
And then when the machine is switched off, that third principle comes into play.
It learns something about the objectives it should be pursuing, because it learns that what it did wasn't right.
In fact, we can, with suitable use of Greek symbols, as mathematicians usually do, actually prove a theorem that says that such a robot is provably beneficial to the human.
You are provably better off with a machine that's designed in this way than without it.
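The incentive Russell refers to can be illustrated with a small numerical toy. The sketch below assumes a Gaussian belief over the utility of the robot's intended action and a human who blocks the action exactly when it is harmful; the distribution, numbers, and function names are illustrative assumptions, not the actual proof.

```python
# A toy version of the off-switch reasoning. The robot is uncertain about
# the utility u of its intended action; the human knows u and, if asked,
# blocks the action exactly when u < 0.
import numpy as np

rng = np.random.default_rng(0)

def incentives(mean, std, n=100_000):
    u = rng.normal(mean, std, n)          # robot's belief over the action's utility
    act_now    = u.mean()                 # just act on the current best guess
    switch_off = 0.0                      # shut down, do nothing
    defer      = np.maximum(u, 0).mean()  # let the human veto harmful actions
    return act_now, switch_off, defer

for std in (0.1, 1.0, 3.0):               # increasing uncertainty about the objective
    act, off, defer = incentives(mean=0.5, std=std)
    print(f"std={std}: act={act:+.3f}  off={off:+.3f}  defer={defer:+.3f}")
# Deferring is never worse than acting or shutting down, and its advantage
# grows with the robot's uncertainty about the underlying objective.
```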
So this is a very simple example, but this is the first step in what we're trying to do with human-compatible AI.
Now, this third principle, I think, is the one that you're probably scratching your head over.
You're probably thinking, "Well, you know, I behave badly.
I don't want my robot to behave like me.
I sneak down in the middle of the night and take stuff from the fridge.
I do this and that."
There's all kinds of things you don't want the robot doing.
But in fact, it doesn't quite work that way.
Just because you behave badly doesn't mean the robot is going to copy your behavior.
It's going to understand your motivations and maybe help you resist them, if appropriate.
But it's still difficult.
What we're trying to do, in fact, is to allow machines to predict for any person and for any possible life that they could live, and the lives of everybody else:
Which would they prefer?
And there are many, many difficulties involved in doing this;
I don't expect that this is going to get solved very quickly.
The real difficulties, in fact, are us.
As I have already mentioned, we behave badly.
In fact, some of us are downright nasty.
Now the robot, as I said, doesn't have to copy the behavior.
The robot does not have any objective of its own.
It's purely altruistic.
And it's not designed just to satisfy the desires of one person, the user, but in fact it has to respect the preferences of everybody.
So it can deal with a certain amount of nastiness, and it can even understand your nastiness; for example, you may take bribes as a passport official because you need to feed your family and send your kids to school.
It can understand that; it doesn't mean it's going to steal.
In fact, it'll just help you send your kids to school.
We are also computationally limited.
Lee Sedol is a brilliant Go player, but he still lost.
So if we look at his actions, he took an action that lost the game.
That doesn't mean he wanted to lose.
So to understand his behavior, we actually have to invert through a model of human cognition that includes our computational limitations, a very complicated model.
But it's still something that we can work on understanding.
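One common way this kind of literature models an imperfect but informative decision-maker (an assumption for illustration, not something the talk specifies) is Boltzmann rationality: the human chooses an action with probability proportional to exp(beta * utility), where beta captures how close to optimal the chooser is.

```python
# Boltzmann rationality: a standard toy model of "imperfect but informative"
# human choices. The human picks action a with probability proportional to
# exp(beta * U(a)); beta measures how close to optimal the chooser is.
import numpy as np

def choice_probs(utilities, beta):
    z = beta * np.asarray(utilities, dtype=float)
    z -= z.max()                          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

U = [1.0, 0.8, -2.0]                      # true utilities of three candidate moves
for beta in (0.5, 2.0, 10.0):
    print(f"beta={beta}:", choice_probs(U, beta).round(3))
# Low beta: choices are noisy, so one bad move says little about preferences.
# High beta (a player like Lee Sedol): choices are near-optimal and highly
# informative, yet still not perfect, which is why the inference model must
# include computational limitations.
```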
Probably the most difficult part, from my point of view as an AI researcher, is the fact that there are lots of us, and so the machine has to somehow trade off, weigh up the preferences of many different people, and there are different ways to do that.
Economists, sociologists, moral philosophers have understood that, and we are actively looking for collaboration.
Let's have a look and see what happens when you get that wrong.
So you can have a conversation, for example, with your intelligent personal assistant that might be available in a few years' time.
Think of a Siri on steroids.
So Siri says, "Your wife called to remind you about dinner tonight."
And of course, you've forgotten.
"What? What dinner? What are you talking about?"
"Uh, your 20th anniversary at 7pm."
"I can't do that. I'm meeting with the secretary-general at 7:30. How could this have happened?"
"Well, I did warn you, but you overrode my recommendation."
"Well, what am I going to do? I can't just tell him I'm too busy."
"Don't worry. I arranged for his plane to be delayed."
(Laughter)
"Some kind of computer malfunction."
(Laughter)
"Really? You can do that?"
"He sends his profound apologies and looks forward to meeting you for lunch tomorrow."
(Laughter)
So the values here: there's a slight mistake going on.
This is clearly following my wife's values, which is "Happy wife, happy life."
(Laughter)
It could go the other way.
You could come home after a hard day's work, and the computer says, "Long day?"
"Yes, I didn't even have time for lunch."
"You must be very hungry."
"Starving, yeah. Could you make some dinner?"
"There's something I need to tell you."
(Laughter)
"There are humans in South Sudan who are in more urgent need than you."
(Laughter)
"So I'm leaving. Make your own dinner."
(Laughter)
So we have to solve these problems, and I'm looking forward to working on them.
There are reasons for optimism.
One reason is, there is a massive amount of data.
Because remember, I said they're going to read everything the human race has ever written.
Most of what we write about is human beings doing things and other people getting upset about it.
So there's a massive amount of data to learn from.
There's also a very strong economic incentive to get this right.
So imagine your domestic robot's at home.
You're late from work again and the robot has to feed the kids, and the kids are hungry and there's nothing in the fridge.
And the robot sees the cat.
(Laughter)
And the robot hasn't quite learned the human value function properly, so it doesn't understand that the sentimental value of the cat outweighs the nutritional value of the cat.
(Laughter)
So then what happens?
Well, it happens like this: "Deranged robot cooks kitty for family dinner."
That one incident would be the end of the domestic robot industry.
So there's a huge incentive to get this right long before we reach superintelligent machines.
So to summarize:
I'm actually trying to change the definition of AI so that we have provably beneficial machines.
And the principles are: machines that are altruistic, that want to achieve only our objectives, but that are uncertain about what those objectives are, and will watch all of us to learn more about what it is that we really want.
And hopefully in the process, we will learn to be better people.
Thank you very much.
(Applause)
Chris Anderson: So interesting, Stuart.
We're going to stand here a bit because I think they're setting up for our next speaker.
A couple of questions.
So the idea of programming in ignorance seems intuitively really powerful.
As you get to superintelligence, what's going to stop a robot reading literature and discovering this idea that knowledge is actually better than ignorance and still just shifting its own goals and rewriting that programming?
Stuart Russell: Yes, so we want it to learn more, as I said, about our objectives.
It'll only become more certain as it becomes more correct, so the evidence is there, and it's going to be designed to interpret it correctly.
It will understand, for example, that books are very biased in the evidence they contain.
They only talk about kings and princes and elite white male people doing stuff.
So it's a complicated problem, but as it learns more about our objectives, it will become more and more useful to us.
CA: And you couldn't just boil it down to one law, you know, hardwired in: "If any human ever tries to switch me off, I comply. I comply."
SR: Absolutely not.
That would be a terrible idea.
So imagine that you have a self-driving car and you want to send your five-year-old off to preschool.
Do you want your five-year-old to be able to switch off the car while it's driving along?
Probably not.
So it needs to understand how rational and sensible the person is.
The more rational the person, the more willing you are to be switched off.
If the person is completely random or even malicious, then you're less willing to be switched off.
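That dependence on the overseer's rationality can be added to the earlier off-switch toy. In the sketch below, the human judges the action's utility through noise; as the judgment gets noisier, the value of deferring falls below the value of acting. The model and all numbers are illustrative assumptions, not a result quoted from the talk.

```python
# Follow-on to the off-switch toy above: the human now judges the action's
# utility u through noise of scale `human_noise` and blocks it when that
# noisy judgment is negative.
import numpy as np

rng = np.random.default_rng(1)

def defer_value(mean, std, human_noise, n=200_000):
    u = rng.normal(mean, std, n)                  # true utility, unknown to the robot
    judged = u + rng.normal(0.0, human_noise, n)  # the human's noisy judgment of u
    return np.where(judged > 0, u, 0.0).mean()    # action runs only if allowed

mean, std = 0.5, 2.0
print(f"act now: {mean:+.3f}")
for human_noise in (0.0, 1.0, 5.0, 20.0):
    print(f"defer, human noise {human_noise}: {defer_value(mean, std, human_noise):+.3f}")
# With a rational overseer (low noise), deferring beats acting. As the
# overseer becomes more random, deferring loses its edge, matching the
# point that willingness to be switched off should track how rational
# the person is.
```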
CA: All right. Stuart, can I just say, I really, really hope you figure this out for us.
Thank you so much for that talk. That was amazing.
SR: Thank you.
(Applause)