By Paula Parisi, November 29, 2023
California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential pluses include improving access to government services by identifying groups that are hindered by language barriers or other obstacles, while the dangers highlight the need to prepare citizens with next-generation skills so they don’t get left behind in the GenAI economy. “This is an important first step in our efforts to fully understand the scope of GenAI and the state’s role in deploying it,” Newsom said, calling California’s strategy “a nuanced, measured approach.” Continue reading Newsom Report Examines Use of AI by California Government
By Paula Parisi, November 8, 2023
CBS is launching a unit charged with identifying misinformation and detecting deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation, CBS News and Stations. CBS plans to hire forensic journalists and will expand training and invest in technologies to assist them in their roles. In addition to flagging deepfakes, CBS News Confirmed will also report on them. Continue reading CBS News Confirmed: New Fact-Checking Unit Examining AI
By Paula Parisi, September 25, 2023
As creators embrace artificial intelligence to juice creativity, TikTok is launching a tool that helps them label their AI-generated content while also beginning to test “ways to label AI-generated content automatically.” “AI enables incredible creative opportunities, but can potentially confuse or mislead viewers,” TikTok said in announcing labels that can apply to “any content that has been completely generated or significantly edited by AI,” including video, photographs, music and more. The platform also touted a policy that “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize.” Continue reading TikTok Creates New Tools for Labeling Content Created by AI
By Paula Parisi, September 19, 2023
The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles are intended to “spur innovation and growth” while implementing social safety measures amidst rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA. Continue reading UK’s Competition Office Issues Principles for Responsible AI
By Paula Parisi, July 14, 2023
Google has updated its transaction policies to allow blockchain-based digital content, such as NFTs, within apps and games distributed through its mobile software marketplace Google Play. Google has been slow to warm to blockchain integration, and the new approach comes with strict transparency requirements. If tokenized digital assets are part of an app or game, “developers must declare this clearly,” Google explains, adding that “developers may not promote or glamorize any potential earning from playing or trading activities.” These stipulations are intended to keep the hype that has attached itself to so much blockchain activity from infiltrating Google Play. Continue reading Google Updates Policies Regarding Blockchain in Play Store
By Paula Parisi, June 26, 2023
The Federal Communications Commission proposed a rule that would require cable TV and multichannel satellite services to disclose full pricing for programming plans in consumer promotional materials and invoicing, a plan President Biden quickly endorsed. The intent is to clearly convey “all-in” costs as a prominent single line item, so that taxes and surcharges are no longer excluded from sales pitches or left difficult to decipher on bills. “Too often, these companies hide additional junk fees on customer bills disguised as ‘broadcast TV’ or ‘regional sports’ fees that in reality pay for no additional services,” Biden said. Continue reading Biden Supports FCC Plan for Multichannel Price Disclosures
By Paula Parisi, June 26, 2023
Senate Majority Leader Chuck Schumer unveiled his approach toward regulating artificial intelligence, beginning with nine listening sessions to explore topics including AI’s impact on the job market, copyright, national security and “doomsday scenarios.” Schumer’s plan — the SAFE (Security, Accountability, Foundations, Explainability) Innovation framework — isn’t proposed legislation, but a discovery roadmap. Set to begin in September, the panels will draw on members of industry, academia and civil society. “Experts aren’t even sure which questions policymakers should be asking,” said Schumer of the learning curve. “In many ways, we’re starting from scratch.” Continue reading Schumer Shares Plan for SAFE AI Senate Listening Sessions
By Paula Parisi, June 26, 2023
IBM and Adobe are expanding their partnership to help enterprise clients accelerate their content supply chains using artificial intelligence, including Adobe Sensei GenAI, released in March, and Adobe’s Firefly generative AI models, now in beta. IBM says it will create a portfolio of Adobe-specific consulting services. Leveraging Adobe’s AI solutions and IBM Consulting services, the companies aim to “help clients build an integrated content supply chain ecosystem that drives collaboration, optimizes creativity, increases speed, automates tasks and enhances stakeholders’ visibility across design and creative projects.” Continue reading IBM and Adobe Advance AI Content Workflow for Enterprise
By Paula Parisi, June 16, 2023
The European Parliament on Wednesday took a major step to legislate artificial intelligence, passing a draft of the AI Act, which puts restrictions on many of what are believed to be the technology’s riskiest uses. The EU has been leading the world in advancing AI regulation, and observers are already citing this developing law as a model framework for global policymakers eager to place guardrails on this rapidly advancing technology. Among the Act’s key tenets: it will dramatically curtail use of facial recognition software and require AI firms such as OpenAI to disclose more about their training data. Continue reading European Union Takes Steps to Regulate Artificial Intelligence
By Paula Parisi, May 4, 2023
The European Union’s Digital Markets Act, applicable as of May 1, finds tech giants scrambling to prepare for compliance in the region. The regulatory framework aims to ensure they don’t abuse their clout by taking advantage of consumers and smaller companies. Within two months, companies providing core platform services will have to notify the European Commission and provide all relevant information. The Commission will then have two months to identify companies that fit the DMA definition of “gatekeeper.” Those that do will be subject to DMA rules and have six months to conform. Continue reading Big Tech Braces for Potential Impact of EU Digital Markets Act
By Paula Parisi, April 25, 2023
The European Union, which has been working on artificial intelligence legislation for the past two years, is playing last-minute catch-up with rapidly evolving technology as it retools a final draft law that can be adopted, possibly by the end of the year. While the European Council in December thought it had completed its framework in all but the details, that version largely deferred attaching specific rules to generative AI. The technology has since exploded, triggering a movement among member states to add those guardrails along with rules for general-purpose AI. Continue reading EU Considers Technology Updates for Next Draft of the AI Act
By Paula Parisi, April 3, 2023
Google is launching an Ads Transparency Center. The “searchable hub” rolls out to global users in the coming weeks and lets anyone look up who is behind an ad, which ads an advertiser has run, and where they appeared across Google Search, YouTube and the Google Display Network. Additional details are provided for political ads, including the amount spent, number of impressions and any location targeting criteria. In 2020 Google began requiring that advertisers verify their identities, and a year later began letting users access some ad info, but its transparency move follows Facebook’s similar offering, which launched in 2019. Continue reading Google Ads Transparency Center Offers Searchable Ad Data
By Paula Parisi, March 22, 2023
The Human Artistry Campaign launched at South by Southwest (SXSW) last week with a goal “to ensure artificial intelligence technologies are developed and used in ways that support human culture and artistry — and not ways that replace or erode it.” With support from over 40 industry organizations — including the Recording Academy, SAG-AFTRA and the Recording Industry Association of America (RIAA) — the coalition outlined principles advocating AI best practices, emphasizing “respect for artists, their work, and their personas; transparency; and adherence to existing law including copyright and intellectual property.” Continue reading Music Industry and Copyright Office Advance Positions on AI
By Paula Parisi, March 13, 2023
The European Union’s implementation of the Digital Services Act (DSA) and the Digital Markets Act (DMA) is poised to trigger worldwide changes on familiar platforms like Google, Instagram, Wikipedia and YouTube. The DSA addresses consumer safety while the DMA deals with antitrust issues. Proponents say the new laws will help end the era of self-regulating tech companies. As in the U.S., the DSA makes clear that platforms aren’t liable for illegal user-generated content. Unlike U.S. law, however, the DSA does allow users to sue when tech firms are made aware of harmful content but fail to remove it. Continue reading Changes Ahead for Big Tech When EU Regulations Enforced
By Paula Parisi, February 24, 2023
Meta Platforms is reforming its penalty system for Facebook policy violations. Based on recommendations from its Oversight Board, the company will focus more on educating users and less on punitive measures like suspending accounts or limiting posts. “While we are still removing violating content just as before,” explains Meta VP of content policy Monika Bickert, “under our new system we will focus more on helping people understand why we have removed their content, which is shown to help prevent re-offending, rather than so quickly restricting their ability to post.” The goal is fairer and more effective content moderation on Facebook. Continue reading Meta’s Penalty Reforms Designed to Be More Effective, Fair