EU Lawmakers Pass AI Act, World’s First Major AI Regulation

The European Union has passed the Artificial Intelligence Act, becoming the first governing body in the world to enact a comprehensive law regulating the development and use of AI. Member states agreed on the framework in December 2023, and the European Parliament adopted it Wednesday with 523 votes in favor, 46 against and 49 abstentions. The legislation establishes what are being called “sweeping rules” for those who build AI as well as those who deploy it. The rules, which will take effect gradually, introduce new risk assessments, ban certain AI uses deemed an unacceptable risk, and mandate transparency requirements.

Researchers Call for Safe Harbor for the Evaluation of AI Tools

Artificial intelligence stakeholders are calling for safe harbor legal and technical protections that would allow them to conduct “good-faith” evaluations of various AI products and services without fear of reprisal. As of last week, more than 300 researchers, academics, creatives, journalists and legal professionals had signed an open letter calling on companies including Meta Platforms, OpenAI and Google to allow access for safety testing and red teaming of systems that, they say, remain shrouded in opaque rules and secrecy even though millions of consumers are already using them.

USPTO Says Only Humans Can Patent, Although AI May Assist

The U.S. Patent and Trademark Office has issued revised guidance on patents for inventions created using artificial intelligence, a fast-developing area of intellectual property law. The advisory says patents may cover AI-assisted inventions in cases where “a natural person provided a significant contribution.” As for what constitutes a sufficiently significant contribution, the agency says it is seeking the “right balance” between “awarding patent protection to promote human ingenuity and investment for AI-assisted inventions while not unnecessarily locking up innovation for future developments,” according to a USPTO blog post.

OpenAI Partners with Common Sense Media on AI Guidelines

As parents and educators grapple with how AI will fit into education, OpenAI is moving preemptively to help answer that question, teaming with learning and child safety group Common Sense Media on informational material and recommended guidelines. The two will also work together to curate “family-friendly GPTs” for the GPT Store that are “based on Common Sense ratings and standards,” the organization said. The partnership aims “to help realize the full potential of AI for teens and families and minimize the risks,” according to Common Sense.

EU Makes Provisional Agreement on Artificial Intelligence Act

The EU has reached a provisional agreement on the Artificial Intelligence Act, making it the first Western democracy to establish comprehensive AI regulations. The sweeping new law predominantly focuses on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act is finalized and goes into effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are set.

Altman Reinstated as CEO of OpenAI, Microsoft Joins Board

Sam Altman has wasted no time since being rehired as CEO of OpenAI on November 22, four days after being fired. This week, the 38-year-old leader of one of the most influential artificial intelligence firms outlined his “immediate priorities” and announced a newly constituted “initial board” that includes a non-voting seat for investor Microsoft. The three voting members so far are former Salesforce co-CEO Bret Taylor as chairman and former U.S. Treasury Secretary Larry Summers, both newcomers, along with returning director Adam D’Angelo, CEO of Quora. Mira Murati, interim CEO during Altman’s brief absence, returns to her role as CTO.

Newsom Report Examines Use of AI by California Government

California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential benefits include improving access to government services by identifying groups hindered by language barriers or other obstacles, while the risks underscore the need to equip residents with next-generation skills so they are not left behind in the GenAI economy. “This is an important first step in our efforts to fully understand the scope of GenAI and the state’s role in deploying it,” Newsom said, calling California’s strategy “a nuanced, measured approach.”

CBS News Confirmed: New Fact-Checking Unit Examining AI

CBS is launching a unit charged with identifying misinformation and exposing deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation at CBS News and Stations. CBS plans to hire forensic journalists, expand training and invest in technologies to assist them in their roles. In addition to flagging deepfakes, CBS News Confirmed will also report on them.

OpenAI Creates a Team to Examine Catastrophic Risks of AI

OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats posed by emerging “superintelligence,” also referred to as frontier models. Topics under review include the parameters required for a robust monitoring and prediction framework and the ways malicious actors might leverage stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models.

OpenAI Developing ‘Provenance Classifier’ for GenAI Images

OpenAI is developing an AI tool that can identify images created by artificial intelligence, specifically those made in whole or in part by its Dall-E 3 image generator. Calling it a “provenance classifier,” company CTO Mira Murati began publicly discussing the detection tool last week but said not to expect a general release anytime soon, despite her claim that it is “almost 99 percent reliable.” That is still not good enough for OpenAI, which recognizes how much is at stake when public perception of an artist’s work can hinge on a label applied by AI, a technology that remains notoriously capricious.

Getty GenAI Tool for Images and Video Is Powered by Nvidia

Nvidia’s Picasso continues to gain traction among visual media companies looking for an AI foundry to train models for generative use. Getty Images has partnered with Nvidia to create custom foundation models for still images and video. Generative AI by Getty Images lets customers create visuals using Getty’s library of licensed photos. The tool is trained on Getty’s own creative library and comes with the company’s guarantee of “full indemnification for commercial use.” Getty joins Shutterstock and Adobe among enterprise clients using Picasso, and Runway and Cuebric are using it as well, even though Picasso is still in development.

UK’s Competition Office Issues Principles for Responsible AI

The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles intended to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safeguards amid the rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA.

DHS Moves to ‘Master’ AI While Keeping It Safe, Trustworthy

The Department of Homeland Security is harnessing artificial intelligence, according to a memo from Secretary Alejandro Mayorkas explaining that the department will use AI to keep Americans safe while implementing safeguards to ensure civil rights, privacy rights and the U.S. Constitution are not violated. DHS named CIO Eric Hysen as its chief AI officer. “DHS must master this technology, applying it effectively and building a world class workforce that can reap the benefits of AI, while meeting the threats posed by adversaries that wield AI,” Mayorkas wrote.

Governor Newsom Orders Study of GenAI Benefits and Risks

California Governor Gavin Newsom signed an executive order for state agencies to study artificial intelligence and its impact on society and the economy. “We’re only scratching the surface of understanding what GenAI is capable of,” Newsom suggested. Recognizing “both the potential benefits and risks these tools enable,” he said his administration is “neither frozen by the fears nor hypnotized by the upside.” The move was couched as a “measured approach” that will help California “focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world’s AI leader.”

Google Introduces an AI Watermark That Cannot Be Removed

Google DeepMind and Google Cloud have teamed to launch what they claim is an indelible AI watermark tool, which, if it works as promised, would mark an industry first. Called SynthID, the technique for identifying AI-generated images is being launched in beta. The technology embeds its digital watermark “directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification,” according to DeepMind. SynthID is being released to a limited number of Google’s Vertex AI customers using Imagen, Google’s text-to-image model that generates photorealistic images from text prompts.
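
DeepMind has not published how SynthID actually embeds or detects its signal, but the general idea of a pixel-level watermark can be illustrated with a deliberately simplified sketch. The Python below hides a keyed bit pattern in the least significant bits of an image and later checks for it; unlike SynthID, which is said to survive edits such as cropping, compression and filtering, this toy version is easily destroyed by them, and all function names here are hypothetical.

# Illustrative sketch only: not SynthID's proprietary method. It shows how a
# signal can live in pixel values that look unchanged to the eye yet remain
# machine-detectable.
import numpy as np

def embed_watermark(image: np.ndarray, key: int = 42) -> np.ndarray:
    """Hide a keyed pseudorandom bit pattern in each pixel's least significant bit."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    # Overwriting the LSB shifts each 0-255 channel value by at most 1: imperceptible.
    return (image & np.uint8(0xFE)) | pattern

def detect_watermark(image: np.ndarray, key: int = 42, threshold: float = 0.95) -> bool:
    """Report whether the image's LSBs closely match the keyed pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    match_rate = float(np.mean((image & np.uint8(1)) == pattern))
    return match_rate >= threshold

if __name__ == "__main__":
    original = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
    marked = embed_watermark(original)
    print(detect_watermark(marked))    # True: LSBs match the keyed pattern
    print(detect_watermark(original))  # False: an unmarked image matches only ~50% by chance

The key detail DeepMind is claiming, and what this sketch does not capture, is robustness: a production watermark has to be woven into the image statistics deeply enough to survive resizing, recompression and filters while remaining invisible.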