Newsom Report Examines Use of AI by California Government

California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential benefits include improving access to government services by identifying groups hindered by language barriers or other obstacles, while the risks underscore the need to prepare Californians with next-generation skills so they are not left behind in the GenAI economy. “This is an important first step in our efforts to fully understand the scope of GenAI and the state’s role in deploying it,” Newsom said, calling California’s strategy “a nuanced, measured approach.”

CBS News Confirmed: New Fact-Checking Unit Examining AI

CBS is launching a unit charged with identifying misinformation and flagging deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation at CBS News and Stations. CBS plans to hire forensic journalists, expand training and invest in technologies to assist them in their roles. In addition to flagging deepfakes, CBS News Confirmed will also report on them.

OpenAI Creates a Team to Examine Catastrophic Risks of AI

OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and assessing threats posed by imminent “superintelligence” AI, also called frontier models. Topics under review include the parameters required for a robust monitoring and prediction framework and how malicious actors might seek to leverage stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models.

OpenAI Developing ‘Provenance Classifier’ for GenAI Images

OpenAI is developing an AI tool that can identify images created by artificial intelligence — specifically those made in whole or in part by its DALL-E 3 image generator. Calling it a “provenance classifier,” company CTO Mira Murati began publicly discussing the detection app last week but said not to expect it in general release anytime soon, despite her claim that it is “almost 99 percent reliable.” That is still not good enough for OpenAI, which knows much is at stake when public perception of artists’ work can be shaped by a notoriously capricious AI filter.

Getty GenAI Tool for Images and Video Is Powered by Nvidia

Nvidia’s Picasso continues to gain market share among visual-content companies looking for an AI foundry to train models for generative use. Getty Images has partnered with Nvidia to create custom foundation models for still images and video. Generative AI by Getty Images lets customers create visuals using Getty’s library of licensed photos. The tool is trained on Getty’s own creative library and carries the company’s guarantee of “full indemnification for commercial use.” Getty joins Shutterstock and Adobe among enterprise clients using Picasso; Runway and Cuebric are using it as well, even though Picasso is still in development.

UK’s Competition Office Issues Principles for Responsible AI

The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safety measures amid the rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA.

DHS Moves to ‘Master’ AI While Keeping It Safe, Trustworthy

The Department of Homeland Security is harnessing artificial intelligence, according to a memo from Secretary Alejandro Mayorkas explaining that the department will use AI to keep Americans safe while implementing safeguards to ensure civil rights, privacy rights and the U.S. Constitution are not violated. DHS appointed Eric Hysen as chief AI officer, moving him into the role from his previous post as CIO. “DHS must master this technology, applying it effectively and building a world class workforce that can reap the benefits of AI, while meeting the threats posed by adversaries that wield AI,” Mayorkas wrote.

Governor Newsom Orders Study of GenAI Benefits and Risks

California Governor Gavin Newsom signed an executive order for state agencies to study artificial intelligence and its impact on society and the economy. “We’re only scratching the surface of understanding what GenAI is capable of,” Newsom suggested. Recognizing “both the potential benefits and risks these tools enable,” he said his administration is “neither frozen by the fears nor hypnotized by the upside.” The move was couched as a “measured approach” that will help California “focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world’s AI leader.”

Google Introduces an AI Watermark That Cannot Be Removed

Google DeepMind and Google Cloud have teamed up to launch what they claim is an indelible AI watermark tool, which, if it works as advertised, would mark an industry first. Called SynthID, the technique for identifying AI-generated images is being launched in beta. The technology embeds its digital watermark “directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification,” according to DeepMind. SynthID is being released to a limited number of Google’s Vertex AI customers using Imagen, Google’s text-to-image model that generates photorealistic images from text prompts.
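DeepMind has not published how SynthID actually encodes its mark, but the general idea of a pixel-level watermark that is invisible to viewers yet verifiable by software can be pictured with a deliberately simplified least-significant-bit sketch. Everything below (the NumPy approach, function names and the 64-bit signature) is an illustrative assumption, not Google's method:

```python
# Toy illustration of imperceptible pixel-level watermarking (NOT SynthID's algorithm).
# A bit pattern is hidden in the lowest bit of each pixel, changing values by at most 1.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the lowest bit of the first len(bits) pixel values."""
    flat = pixels.flatten()  # flatten() returns a copy, so the original image is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def detect_watermark(pixels: np.ndarray, bits: np.ndarray) -> bool:
    """Check whether the expected bit pattern is present in the lowest bits."""
    flat = pixels.flatten()
    return np.array_equal(flat[: bits.size] & 1, bits)

# Usage: mark an 8-bit grayscale image with a 64-bit signature, then verify it.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
signature = rng.integers(0, 2, size=64, dtype=np.uint8)
marked = embed_watermark(image, signature)
print(detect_watermark(marked, signature))                          # True
print(int(np.abs(marked.astype(int) - image.astype(int)).max()))    # 1 at most per pixel
```

Unlike this toy scheme, which a simple re-save or crop would destroy, DeepMind claims SynthID survives edits such as compression and color changes, which is why the company calls it an industry first.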

YouTube Launches Music AI Incubator with UMG as Partner

YouTube is developing a plan for responsible AI that includes creating a framework to compensate recording artists and copyright holders for machine-generated music. YouTube’s Music AI Incubator — with support from early partner Universal Music Group — aims to help singers, songwriters, musicians and producers sort out issues like compensation and intellectual property protections, and work with trade groups and government officials on means of enforcement. YouTube CEO Neal Mohan says creators “have embraced AI to streamline and boost their creative processes,” with YouTube logging “more than 1.7 billion views of videos related to AI tools” this year.

Yaccarino: X Getting Video Calls with Its ‘Everything’ Rebrand

X is developing a video-calling feature to add as part of its rebranding as an “everything app.” X CEO Linda Yaccarino shared the news in her first television interview since leaving NBCUniversal in June to lead Elon Musk’s social media platform, which was then still known as Twitter. Yaccarino said X users will soon be able to make video calls based on their social ID alone, without sharing phone numbers. She added that long-form video, creator subscriptions and the ability to make payments on the platform are also coming to X.

Meta Plans Personality-Driven Chatbots to Boost Engagement

Meta Platforms is amping up its AI play, with plans to launch a suite of personality-driven chatbots as soon as next month. The company has been developing the series of artificially intelligent character bots with a goal of using them to boost engagement with its social media brands by making them available to have “humanlike discussions” on platforms including Facebook, Instagram and WhatsApp. Internally dubbed “personas,” the chatbots simulate characters ranging from historical figures like Abraham Lincoln to a surfer dude that dispenses travel advice.

Artificial Intelligence Will Likely Impact the Future of TV News

Local TV news may soon undergo an AI-driven revolution that will make artificially generated newscasts a reality nearly 40 years after digital anchor Max Headroom introduced the concept. Veteran newsman and author Hank Price predicts that while the transition is still a few years away, the process is already underway: AI is being used to alter the voices and images of human anchors, opening the possibility of eventually creating computer-generated newsreaders with their own personalities. Comparing the advent of newsroom AI to the switch to robotic cameras, he says the move will be costly up front but will save money over time.

Top Tech Firms Support Government’s Planned AI Safeguards

President Biden has secured voluntary commitments from seven leading AI companies that say they will support the executive branch’s goal of advancing safe, secure and transparent development of artificial intelligence. Executives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI convened at the White House on Friday to support the accord, under which the companies embrace independent security testing and a commitment to collaborating with one another and with the government; some critics dismissed the voluntary pledges as a half measure. Biden stressed the need to deploy AI altruistically, “to help address society’s greatest challenges.”

Adobe Pursues Ethical, Responsible AI in the Creative Space

As a next step in its pursuit of ethical AI, Adobe has announced that its Firefly generative AI platform now supports text prompts in more than 100 languages. The company says Firefly has been used to generate over one billion images across Firefly and Photoshop since its launch in March. Adobe has also deployed the technology in Express, Illustrator and Creative Cloud. Positioning the news as a global expansion, Adobe says the standalone Firefly web service will support text prompts in users’ native languages, with localization coming to more than 20 additional languages.