By Paula Parisi, May 15, 2024
France has been courting Big Tech, and Microsoft and Amazon are among the first to express interest. Microsoft has committed $4.3 billion to expanding cloud and AI infrastructure there, sharing plans to bring as many as 25,000 advanced GPUs to France by the close of 2025. The software giant will also train one million people for AI and data jobs while supporting 2,500 AI startups over the next three years. Meanwhile, Amazon announced that it would invest up to $1.3 billion to expand its existing footprint of 35 logistics facilities in the country. The deals were announced Monday during the Choose France summit hosted by French President Emmanuel Macron. Continue reading Microsoft, Amazon Commit to Expanding Operations in France
By ETCentric Staff, April 3, 2024
The United States has entered into an agreement with the United Kingdom to collaboratively develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by advanced models. Coming after commitments made at the AI Safety Summit in November, the deal is being described as the world's first bilateral agreement on AI safety. The agreement, signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, envisions the countries "working to align their scientific approaches" and accelerating evaluations for AI models, systems and agents. Continue reading U.S. and UK Form Partnership to Accelerate AI Safety Testing
By ETCentric Staff, April 2, 2024
OpenAI has debuted a new text-to-voice generation platform called Voice Engine, available in limited access. Voice Engine can generate a synthetic voice from a 15-second clip of someone’s voice. The synthetic voice can then read a provided text, even translating to other languages. For now, only a handful of companies are using the tech under a strict usage policy as OpenAI grapples with the potential for misuse. “These small scale deployments are helping to inform our approach, safeguards, and thinking about how Voice Engine could be used for good across various industries,” OpenAI explained. Continue reading OpenAI Voice Cloning Tool Needs Only a 15-Second Sample
By ETCentric Staff, April 1, 2024
The White House is rolling out a new AI policy across the federal government, to be administered by the Office of Management and Budget (OMB). Vice President Kamala Harris announced the new rules, which require that all federal agencies have a senior leader overseeing the use of AI systems, in an effort to ensure that AI deployed in public service remains safe and unbiased. The move was positioned as making good on "a core component" of President Biden's AI Executive Order (EO), issued in October. Federal agencies reported completing the 150-day actions tasked by the EO. Continue reading Federal Policy Specifies Guidelines for Risk Management of AI
By ETCentric Staff, April 1, 2024
Google.org, the charitable arm of Alphabet, has launched a program to help fund non-profits working on technology to support "high-impact applications of generative AI." The Google.org Accelerator: Generative AI is a six-month program that kicks off with more than $20 million in grants for 21 non-profits. Among them are student writing aid Quill.org; Tabiya, which helps job seekers in low- to middle-income countries; and Benefits Data Trust, which helps low-income applicants access and enroll in public benefits. In addition to funds, the new unit provides mentorship, technical training and pro bono support from "a dedicated AI coach." Continue reading Google GenAI Accelerator Launches with $20 Million in Grants
By ETCentric Staff, March 25, 2024
The United Nations General Assembly on Thursday adopted a U.S.-led resolution to promote “safe, secure and trustworthy” artificial intelligence systems and their sustainable development for the benefit of all. The non-binding proposal, which was adopted without a formal vote, drew support from more than 122 co-sponsors, including China and India. It emphasizes “the respect, protection and promotion of human rights in the design, development, deployment and use” of responsible and inclusive AI. “The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms. Continue reading UN Adopts Global AI Resolution Backed by U.S., 122 Others
By ETCentric Staff, March 15, 2024
The European Union has passed the Artificial Intelligence Act, becoming the first governing body in the world to enact a comprehensive law regulating AI's development and use. Member states agreed on the framework in December 2023, and it was adopted Wednesday by the European Parliament with 523 votes in favor, 46 against and 49 abstentions. The legislation establishes what are being called "sweeping rules" for those building AI as well as those who deploy it. The rules, which will take effect gradually, implement new risk assessments, ban AI uses deemed an unacceptable risk, and mandate transparency requirements. Continue reading EU Lawmakers Pass AI Act, World's First Major AI Regulation
By ETCentric Staff, March 11, 2024
Artificial intelligence stakeholders are calling for legal and technical safe harbor protections that would allow them to conduct "good-faith" evaluations of AI products and services without fear of reprisal. More than 300 researchers, academics, creatives, journalists and legal professionals had as of last week signed an open letter calling on companies including Meta Platforms, OpenAI and Google to allow access for safety testing and red teaming of systems they say are shrouded in opaque rules and secrecy, despite the fact that millions of consumers are already using them. Continue reading Researchers Call for Safe Harbor for the Evaluation of AI Tools
By ETCentric Staff, February 14, 2024
The U.S. Patent and Trademark Office has issued revised guidance on patents for inventions created using artificial intelligence, a fast-developing category of intellectual property law. The advisory says patents may cover AI-assisted inventions in cases where "a natural person provided a significant contribution." As for what constitutes a sufficiently significant contribution, the agency is seeking the "right balance" between "awarding patent protection to promote human ingenuity and investment for AI-assisted inventions while not unnecessarily locking up innovation for future developments," according to a USPTO blog post. Continue reading USPTO Says Only Humans Can Patent, Although AI May Assist
By Paula Parisi, January 31, 2024
As parents and educators grapple with figuring out how AI will fit into education, OpenAI is preemptively acting to help answer that question, teaming with learning and child safety group Common Sense Media on informational material and recommended guidelines. The two will also work together to curate “family-friendly GPTs” for the GPT Store that are “based on Common Sense ratings and standards,” the organization said. The partnership aims “to help realize the full potential of AI for teens and families and minimize the risks,” according to Common Sense. Continue reading OpenAI Partners with Common Sense Media on AI Guidelines
By Paula Parisi, December 12, 2023
The EU has reached a provisional agreement on the Artificial Intelligence Act, making it the first Western democracy to establish comprehensive AI regulations. The sweeping new law predominantly focuses on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act is finalized and goes into effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are set. Continue reading EU Makes Provisional Agreement on Artificial Intelligence Act
By Paula Parisi, December 1, 2023
Sam Altman has wasted no time since being rehired as CEO of OpenAI on November 22, four days after being fired. This week, the 38-year-old leader of one of the most influential artificial intelligence firms outlined his “immediate priorities” and announced a newly constituted “initial board” that includes a non-voting seat for investor Microsoft. The three voting members thus far include former Salesforce co-CEO Bret Taylor as chairman and former U.S. Treasury Secretary Larry Summers — both newcomers — and sophomore Adam D’Angelo, CEO of Quora. Mira Murati, interim CEO during Altman’s brief absence, returns to her role as CTO. Continue reading Altman Reinstated as CEO of OpenAI, Microsoft Joins Board
By Paula Parisi, November 29, 2023
California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential benefits include improving access to government services by identifying groups hindered by language barriers or other obstacles; the dangers, meanwhile, highlight the need to prepare citizens with next-generation skills so they don't get left behind in the GenAI economy. "This is an important first step in our efforts to fully understand the scope of GenAI and the state's role in deploying it," Newsom said, calling California's strategy "a nuanced, measured approach." Continue reading Newsom Report Examines Use of AI by California Government
By Paula Parisi, November 8, 2023
CBS is launching a unit charged with identifying misinformation and flagging deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation at CBS News and Stations. CBS plans to hire forensic journalists, expand training, and invest in technologies to assist them in their role. In addition to flagging deepfakes, CBS News Confirmed will also report on them. Continue reading CBS News Confirmed: New Fact-Checking Unit Examining AI
By Paula Parisi, November 2, 2023
OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats posed by imminent "superintelligence" AI, also called frontier models. Topics under review include the parameters required for a robust monitoring and prediction framework and how malicious actors might seek to leverage stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models. Continue reading OpenAI Creates a Team to Examine Catastrophic Risks of AI