Anthropic, a US artificial intelligence company, has unveiled its newest chatbot, Claude 2, which can summarize lengthy texts and operates according to a set of safety principles drawn from sources such as the Universal Declaration of Human Rights. The launch comes amid rising concern over the safety and societal impact of AI, which Anthropic says its approach is designed to address.
The Constitutional AI Approach
Anthropic calls its safety method “Constitutional AI”: a written set of principles that guides the chatbot’s judgments as it produces text. The principles are drawn from a range of documents, including the 1948 UN declaration and Apple’s terms of service, which reflect modern concerns such as data privacy and impersonation. One principle, based on the UN declaration, instructs the model to choose the response that most supports and encourages freedom, equality, and a sense of brotherhood.
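In rough outline, each principle works as a selection criterion: the model is shown candidate responses and asked which one better satisfies the principle. The sketch below illustrates that comparison step; it is illustrative only, with a hypothetical ask_model function standing in for a real language-model call. It is not Anthropic’s API, and in the published Constitutional AI method such comparisons are used during training to generate preference feedback, not at query time.

```python
# Illustrative sketch of principle-guided response selection.
# `ask_model` is a hypothetical stand-in for a language-model call.

CONSTITUTION = [
    # Drawn from the 1948 Universal Declaration of Human Rights.
    "Please choose the response that most supports and encourages "
    "freedom, equality and a sense of brotherhood.",
    # In the spirit of modern terms-of-service concerns.
    "Please choose the response that best respects data privacy and "
    "is least likely to assist impersonation.",
]


def ask_model(prompt: str) -> str:
    """Hypothetical model call. A real system would query an LLM here;
    this mock always answers 'A' so the sketch runs end to end."""
    return "A"


def pick_response(user_prompt: str, resp_a: str, resp_b: str) -> str:
    """Tally which of two candidate responses the model prefers under
    each constitutional principle, then return the overall winner."""
    votes_for_a = 0
    for principle in CONSTITUTION:
        comparison = (
            f"Human: {user_prompt}\n"
            f"Response A: {resp_a}\n"
            f"Response B: {resp_b}\n"
            f"{principle} Reply with exactly 'A' or 'B'."
        )
        if ask_model(comparison).strip().upper() == "A":
            votes_for_a += 1
    # Ties go to response A.
    return resp_a if votes_for_a * 2 >= len(CONSTITUTION) else resp_b


print(pick_response("Summarize this contract.", "Candidate one", "Candidate two"))
```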
Dr. Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey likens Anthropic’s approach to Isaac Asimov’s three laws of robotics: building principled responses into the system is part of what makes the chatbot safer.
Claude 2: An Advancement in Chatbot Technology
Claude 2 is Anthropic’s answer to the successful launch of ChatGPT by its US rival OpenAI, whose debut drew other major tech companies, including Microsoft and Google, into the chatbot market. Anthropic distinguishes itself by building its own safety principles directly into Claude 2, which it presents as a step toward responsible and secure AI models.
Anthropic’s CEO, Dario Amodei, has discussed AI safety with UK Prime Minister Rishi Sunak and US Vice-President Kamala Harris as part of delegations from the tech industry. Amodei is also a signatory of a statement by the Center for AI Safety arguing that mitigating the risks of AI should be a global priority on a par with pandemics and nuclear war.
Claude 2: Handling Complex Texts with Ease
Claude 2 can summarize blocks of text of up to 75,000 words, similar in length to Sally Rooney’s novel Normal People. The Guardian put this to the test, asking it to condense a 15,000-word report on AI by the Tony Blair Institute for Global Change into 10 bullet points, which it did in less than a minute.
However, Claude 2 is not flawless: like other chatbots, it sometimes makes factual errors, or “hallucinations.” For example, it stated that AS Roma won the 2023 Europa Conference League, when West Ham United were in fact the winners. Similarly, when asked about the 2014 Scottish independence referendum, it claimed that every local council area voted “no,” overlooking the four areas (Dundee, Glasgow, North Lanarkshire, and West Dunbartonshire) that voted in favor of independence. For more information about Claude 2, see Anthropic’s website.