Intel joins new MLCommons working group to develop AI safety benchmarks

Fri, 27th Oct 2023

Intel has announced it is joining the newly formed MLCommons AI Safety (AIS) working group, a collaboration of industry and academic experts in artificial intelligence. As a founding member, Intel will contribute its expertise to help develop a flexible platform of benchmarks that measure the safety and risk factors of AI tools and models. As the working group matures, these standard AI safety benchmarks are intended to become a key resource for guiding safe AI deployment.

Intel is devoted to responsibly advancing AI and making it accessible to all. Highlighting the importance of safety, Deepak Patil, Intel corporate vice president and general manager of Data Center AI Solutions, said: "We approach safety concerns holistically and develop innovations across hardware and software to enable the ecosystem to build trustworthy AI. Due to the ubiquity and pervasiveness of large language models, it is crucial to work across the ecosystem to address safety concerns in the development and deployment of AI. To this end, we're pleased to join the industry in defining the new processes, methods, and benchmarks to improve AI everywhere."

The working group's efforts centre on the responsible training and deployment of large language models (LLMs) and related tools. This focus helps mitigate the societal risks posed by these powerful technologies, a concern Intel has long recognised, particularly the ethical and human rights implications of technology development. The working group will also provide a safety rating system for evaluating the risks posed by new, rapidly evolving AI technologies.

The AI Safety working group, organised by MLCommons, brings together a multidisciplinary group of AI experts. They aim to develop a platform and a pool of contributed tests to support AI safety benchmarks for diverse use cases. Intel's participation in the AIS working group aligns with its ongoing commitment to responsibly advancing AI technologies.

As part of its participation, Intel will share AI safety findings as well as best practices and processes for responsible development, such as red-teaming and safety testing. The working group's initial focus is developing safety benchmarks for LLMs, building on groundwork laid by researchers at Stanford University's Center for Research on Foundation Models through its Holistic Evaluation of Language Models (HELM) framework. Intel also plans to share the rigorous, multidisciplinary review processes it uses internally to develop AI models and tools. These contributions will help establish a common set of best practices and benchmarks for evaluating the safe development and deployment of generative AI tools that leverage LLMs.
