By ETCentric Staff, April 1, 2024
The White House is rolling out a new AI policy across the federal government, to be overseen by the Office of Management and Budget (OMB). Vice President Kamala Harris announced the new rules, which require every federal agency to designate a senior leader responsible for its use of AI systems, in an effort to ensure that AI deployed in public service remains safe and unbiased. The move was positioned as making good on “a core component” of President Biden’s AI Executive Order (EO), issued in October. Federal agencies reported completing the 150-day actions tasked by the EO. Continue reading Federal Policy Specifies Guidelines for Risk Management of AI
By ETCentric Staff, April 1, 2024
Google.org, the charitable arm of Google parent Alphabet, has launched a program to help fund nonprofits working on technology that supports “high-impact applications of generative AI.” The Google.org Accelerator: Generative AI is a six-month program that kicks off with more than $20 million in grants for 21 nonprofit organizations. Among them are student writing aid Quill.org; Tabiya, which helps job seekers in low- to middle-income countries; and Benefits Data Trust, which helps low-income applicants access and enroll in public benefits. In addition to funds, the new unit provides mentorship, technical training and pro bono support from “a dedicated AI coach.” Continue reading Google GenAI Accelerator Launches with $20 Million in Grants
By ETCentric Staff, March 25, 2024
The United Nations General Assembly on Thursday adopted a U.S.-led resolution to promote “safe, secure and trustworthy” artificial intelligence systems and their sustainable development for the benefit of all. The non-binding proposal, which was adopted without a formal vote, drew support from more than 122 co-sponsors, including China and India. It emphasizes “the respect, protection and promotion of human rights in the design, development, deployment and use” of responsible and inclusive AI. “The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms. Continue reading UN Adopts Global AI Resolution Backed by U.S., 122 Others
By ETCentric Staff, March 15, 2024
The European Union has passed the Artificial Intelligence Act, becoming the first major jurisdiction to enact a comprehensive law regulating AI’s development and use. Member states agreed on the framework in December 2023, and it was adopted Wednesday by the European Parliament with 523 votes in favor, 46 against and 49 abstentions. The legislation establishes what are being called “sweeping rules” for those building AI as well as those who deploy it. The rules, which will take effect gradually, implement new risk assessments, ban AI uses deemed “high risk,” and mandate transparency requirements. Continue reading EU Lawmakers Pass AI Act, World’s First Major AI Regulation
By ETCentric Staff, March 11, 2024
Artificial intelligence stakeholders are calling for safe harbor legal and technical protections that would allow them to conduct “good-faith” evaluations of various AI products and services without fear of reprisal. More than 300 researchers, academics, creatives, journalists and legal professionals had signed an open letter as of last week calling on companies including Meta Platforms, OpenAI and Google to allow access for safety testing and red teaming of systems they say are shrouded in opaque rules and secrecy, despite the fact that millions of consumers are already using them. Continue reading Researchers Call for Safe Harbor for the Evaluation of AI Tools
By ETCentric Staff, February 14, 2024
The U.S. Patent and Trademark Office has issued revised guidance on patents for inventions created using artificial intelligence, a fast-developing category of intellectual property law. The advisory says patents may cover AI-assisted inventions in cases where “a natural person provided a significant contribution.” As to what constitutes a sufficiently significant contribution, the agency is seeking the “right balance” between “awarding patent protection to promote human ingenuity and investment for AI-assisted inventions while not unnecessarily locking up innovation for future developments,” according to a USPTO blog post. Continue reading USPTO Says Only Humans Can Patent, Although AI May Assist
By Paula Parisi, January 31, 2024
As parents and educators grapple with figuring out how AI will fit into education, OpenAI is preemptively acting to help answer that question, teaming with learning and child safety group Common Sense Media on informational material and recommended guidelines. The two will also work together to curate “family-friendly GPTs” for the GPT Store that are “based on Common Sense ratings and standards,” the organization said. The partnership aims “to help realize the full potential of AI for teens and families and minimize the risks,” according to Common Sense. Continue reading OpenAI Partners with Common Sense Media on AI Guidelines
By Paula Parisi, December 12, 2023
The EU has reached a provisional agreement on the Artificial Intelligence Act, making it the first Western democracy to establish comprehensive AI regulations. The sweeping new law predominantly focuses on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act is finalized and goes into effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are set. Continue reading EU Makes Provisional Agreement on Artificial Intelligence Act
By Paula Parisi, December 1, 2023
Sam Altman has wasted no time since being rehired as CEO of OpenAI on November 22, four days after being fired. This week, the 38-year-old leader of one of the most influential artificial intelligence firms outlined his “immediate priorities” and announced a newly constituted “initial board” that includes a non-voting seat for investor Microsoft. The three voting members thus far are former Salesforce co-CEO Bret Taylor as chairman and former U.S. Treasury Secretary Larry Summers — both newcomers — and returning director Adam D’Angelo, CEO of Quora. Mira Murati, interim CEO during Altman’s brief absence, returns to her role as CTO. Continue reading Altman Reinstated as CEO of OpenAI, Microsoft Joins Board
By Paula Parisi, November 29, 2023
California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential benefits include improving access to government services by identifying groups hindered by language barriers or other obstacles, while the risks underscore the need to equip citizens with next-generation skills so they don’t get left behind in the GenAI economy. “This is an important first step in our efforts to fully understand the scope of GenAI and the state’s role in deploying it,” Newsom said, calling California’s strategy “a nuanced, measured approach.” Continue reading Newsom Report Examines Use of AI by California Government
By Paula Parisi, November 8, 2023
CBS is launching a unit charged with identifying misinformation and avoiding deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation at CBS News and Stations. CBS plans to hire forensic journalists, expand training, and invest in technologies to assist them in their role. In addition to flagging deepfakes, CBS News Confirmed will also report on them. Continue reading CBS News Confirmed: New Fact-Checking Unit Examining AI
By Paula Parisi, November 2, 2023
OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats posed by imminent “superintelligence” AI, also referred to as frontier models. Topics under review include the parameters required for a robust monitoring and prediction framework, and how malicious actors might seek to leverage stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models. Continue reading OpenAI Creates a Team to Examine Catastrophic Risks of AI
By Paula Parisi, October 25, 2023
OpenAI is developing an AI tool that can identify images created by artificial intelligence — specifically those made in whole or in part by its DALL·E 3 image generator. Calling it a “provenance classifier,” company CTO Mira Murati began publicly discussing the detection app last week but said not to expect a general release anytime soon — this despite Murati’s claim that it is “almost 99 percent reliable.” That is still not good enough for OpenAI, which knows there is much at stake when public perception of artists’ work can be shaped by a filter applied by AI, which is notoriously capricious. Continue reading OpenAI Developing ‘Provenance Classifier’ for GenAI Images
By Paula Parisi, October 2, 2023
Nvidia’s Picasso continues to gain market share among visual companies looking for an AI foundry to train models for generative use. Getty Images has partnered with Nvidia to create custom foundation models for still images and video. Generative AI by Getty Images lets customers create visuals using Getty’s library of licensed photos. The tool is trained on Getty’s own creative library and has the company’s guarantee of “full indemnification for commercial use.” Getty joins Shutterstock and Adobe among enterprise clients using Picasso. Runway and Cuebric are using it, too — and Picasso is still in development. Continue reading Getty GenAI Tool for Images and Video Is Powered by Nvidia
By Paula Parisi, September 19, 2023
The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safety measures amidst rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot, and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA. Continue reading UK’s Competition Office Issues Principles for Responsible AI