The Challenges of AI
Technological advancement has always been a defining factor of human civilization, and in the modern world technology has undergone enormous leaps. In the past, technological inventions did not surpass or attempt to break the laws of nature. With the advent of genetic technologies, scientists can now shape embryos, clone organisms, and alter the genetic code itself. Since the beginning of the twenty-first century, artificial intelligence has gradually come to the fore of technological discussions. AI has the potential to change the fundamental ways societies are organized. However, it may also be a Pandora's box that eventually destroys the foundations on which social structures rest. This essay examines the potential challenges and dangers that lie ahead and offers insight into possible solutions.
AI, first of all, has the capability to break the basic trust between people when they communicate, which carries profound implications for the power of the nation-state. At the current speed of development, people will likely soon see AI "live" on social media and across the web, interacting with human users. There are already AI extensions that one can add to an internet browser; these extensions can read and parse the content of a webpage or video, and can even act like human interpreters or fellow surfers on the internet. If AI bots and related technologies become a permanent presence on social media platforms, real and fake humans will in effect coexist on those platforms. When a person sends messages or converses with others around the globe, it could become quite difficult to discern who is real, who is fake, which messages come from real users, and which come from bots. If the world allows this situation to unfold, AI threatens to break the most fundamental trust between individuals: the trust that people are sharing a discourse with other real people.
To analyze the impact of such a break in trust, consider first the financial system as an analogous case. There is a reason most countries strictly ban counterfeit money even though the technology to produce it exists. Money itself does not possess any value; the people in an economy who collectively use and trust it bestow upon it an imagined value. Counterfeit money exactly resembles real money, but it lacks the latter's buying power. Hence, if people become aware that counterfeit money coexists in abundance with real currency, they must constantly cope with the fear that the bank notes they hold might be worthless. Their faith in the financial system and in the currency itself can dwindle, leading to a financial crisis.
In the case of AI, a similar scenario might occur. If AI bots that resemble real humans flood the internet, people can lose trust in the act of communicating online. Few people intrinsically trust, or prefer, a conversation with an AI bot over one with a real human, and widespread fear and uncertainty will seep into everyday exchanges. Today, especially in Western democracies, reliable, trustworthy conversation is an indispensable condition for governments to function properly. AI has the potential to break these bridges of communication and shatter the cornerstones of a functioning government. Even without AI, democracies are already experiencing a deficit of public trust: there was dispute over who won the last U.S. presidential election even after Mr. Biden was sworn into office. From a fundamental viewpoint, should not every participating member of a democracy have unanimous trust in the result of an election? If AI further undermines the effectiveness of communication, trust in one another and in the political process may continue to fall, with disastrous consequences.
While AI may erode the power of the nation-state and the political system by reducing trust in communication, its massive proliferation poses a second issue, forming a dual challenge. The latest AI models are being developed by private companies and the open-source community. In the near future, AI tools and software may be available to every individual and organization with access to a computer and the internet. As with nuclear weapons, once more people learn the mechanisms and techniques of developing and deploying AI, the technology will proliferate quickly on a global scale. When AI becomes ingrained in every facet of society, and the majority acquires the skills and awareness to turn it to their personal needs, regulating the technology looms as a significant challenge. As some societies grow more polarized and divided, how can leaders ensure that different groups do not use AI to inflict harm on one another? That question is just the tip of the iceberg. AI will be one of the most powerful and versatile technologies available on a massive scale, and this makes it exceedingly difficult for governments and other sectors to establish guardrails: it is impossible to watch over every piece of AI software that is written or used.
Human technologies display a pattern: as time passes, they become more refined, efficient, and accessible. AI is no exception. Just a few years ago, people regarded AI as a tool reserved for tech companies and computer whizzes. Now, with the birth of ChatGPT and other widely available software, almost anyone can quickly learn to integrate AI into daily study, work, and leisure. In the future, AI may evolve further, becoming still more convenient and versatile. This trend is concerning. Within a few years, AI may far surpass the human brain at processing titanic amounts of information and data. If engineers continue to train the creativity of large language models, it is possible that AI will one day devise systems and plans that few humans can thoroughly understand. People might be unable to regulate or keep a vigilant eye on such systems, a situation that often leads to disaster. The financial crash of 2007 to 2009 occurred partly because Wall Street's math geniuses developed financial instruments that seemed so powerful that few could envision their risks and limitations. AI could cause similar catastrophes by the same formula.
Faced with these challenges, the world must come together to find solutions. First, it is crucial to establish guardrails and adopt a precautionary approach now. Precaution means that AI developers must forgo certain branches of the technology that carry great unpredictability and danger; for instance, developers should not train AI to be fully autonomous or to possess recursive self-improvement abilities. There will be tradeoffs. In hindsight, people might lament the many opportunities missed because developers chose not to build certain capabilities. But AI development is currently accelerating at such a pace that the world is still learning to weigh the benefits and costs of its decisions. It is rash and dangerous to unleash the full potential of AI before leaders have established proper guardrails.
Second, the leading developers of AI should share their findings and practices, even though they may be competitors in the market. If one company discovers a vulnerability in a model that can be addressed with a certain method, it should share that finding with others so that mutual risks can be avoided. With a technology this powerful and unpredictable, broad collaboration and communication are essential. Today, countries communicate and warn one another when global economic issues arise; the same model should apply to AI. Everyone should be informed of the strengths and weaknesses of a model, and a company with successful experience should encourage others to adopt similar principles and practices. Countering these challenges requires a collective effort.
Third, in the Western world, it should be made explicitly clear that AI must not be used for electioneering for now. Some people do exaggerate the impact AI can have on elections. For example, the notion that AI will make disinformation dramatically more alarming is only true to an extent, since disinformation was already a major problem in democracies before the age of AI. However, because AI does have the potential to shatter the cornerstones of a democratic nation-state, caution is the wisest option. Governments cannot yet trust these models to navigate complex politics or to remain unaffected by biased content on the web. Many AI systems still produce racist, sexist, and otherwise extreme claims, which are highly detrimental to political and social stability.
Finally, at the macro international level, containing the development of AI is a daunting and arduous endeavor. Countries are divided among themselves, making efficient collaboration difficult: what happens when one country wishes to prohibit certain actions but others disagree? The world must therefore establish a common set of values and a coalition of the willing. Countries must realize that it remains in the collective interest to create these values and hold them as universal, even if some countries do not adhere to them; there are currently few better options. It is also vital to build new institutions that can understand and react to the rapid development of this technology, since governments today seem largely unprepared. The world needs an external entity with the human, economic, and technological resources to monitor AI. Such an entity must earn the public's trust; without trust, independent institutions will have no real power, and without such institutions, controlling AI will become riskier and more arduous than ever.
- Tags: Original
- Link: http://www.jack-utopia.cn//article/626
- Copyright: Written and published by Jack. Reproduction must follow the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.