US Government Takes Steps to Establish AI Accountability Measures


Commerce Department Seeks Public Comment on AI Regulations

The US government is moving forward on setting rules for artificial intelligence (AI) tools, seeking public input on how to create accountability measures. As generative AI and chatbots continue to surge in popularity, the US Commerce Department is asking for help advising policymakers on how to approach the technology. The goal is to establish mechanisms that ensure AI systems are trustworthy and that address concerns about privacy and transparency.

Alan Davidson: Public Feedback is Key

At a press conference held at the University of Pittsburgh, Alan Davidson, head of the National Telecommunications and Information Administration (NTIA), emphasized the importance of public input. The NTIA is requesting feedback from researchers, industry groups, and privacy organizations on how to develop audits and assessments of AI tools built by private industry. The aim is to establish guardrails for evaluating these systems' performance, safety, privacy protections, and potential for discrimination and misinformation.

Biden Administration’s Efforts

The Biden administration has previously released a voluntary AI “bill of rights” blueprint that outlines five principles for developing AI systems, focusing on data privacy, protection against algorithmic discrimination, and transparency about when automated systems are in use. The National Institute of Standards and Technology has also published a voluntary AI risk management framework intended to minimize harm to the public, and federal agencies are examining how existing regulations may apply to AI.

US Catching Up to Europe

Europe has been more proactive in regulating AI, particularly with its proposed Artificial Intelligence Act, while the US government has historically been slow to respond to rapidly advancing technologies. Tech companies in the US have had more freedom to collect and share user data, raising concerns about privacy and data security. The absence of federal restrictions has also allowed AI tools such as chatbots to be developed and released without a regulatory framework.


Public Input for Responsible Regulation

The US government recognizes the need for public input in shaping a responsible AI regulatory framework. Officials believe that appropriate guardrails can support innovation while addressing concerns about potential harm, and that public feedback will help strike a balance between fostering innovation and protecting against negative consequences.

About Author

Teacher, programmer, AI advocate, One Piece fan, and someone who pretends to know how to cook. Michael graduated in Computer Science, and in 2019 and 2020 he worked on several projects coordinated by the municipal education department aimed at introducing public school students to programming and robotics. Today he writes for Wicked Sciences, but says his heart will always belong to Python.