OpenAI Creates a Team to Examine Catastrophic Risks of AI

OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats posed by imminent “superintelligence” AI, also called frontier models. Topics under review include the parameters required for a robust monitoring and prediction framework and how malicious actors might leverage stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models. Continue reading OpenAI Creates a Team to Examine Catastrophic Risks of AI

Google Taps AI for Tools to Help Authenticate Search Results

Google is rolling out three new tools to help verify images and search results: “About this image,” Fact Check Explorer and Search Generative Experience (SGE), all of which add context to Google Search results. “About this image” is rolling out globally to English-language users as part of the Google Search UI. Fact Check Explorer, available in beta since summer, will let journalists and professional fact checkers delve more deeply into an image or topic via API. Search Generative Experience uses GenAI to generate descriptions of some source websites, which will appear under “more about this page.” Continue reading Google Taps AI for Tools to Help Authenticate Search Results
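For readers curious how fact-check data is already queried programmatically, below is a minimal sketch against the publicly documented Google Fact Check Tools claims:search endpoint; the beta image-focused Fact Check Explorer API described above may expose a different surface, and the environment variable name and query string here are placeholders.

```python
import os
import requests

# Publicly documented Google Fact Check Tools API endpoint (text claim search).
# The beta image-lookup features mentioned above may use a different endpoint.
FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(query: str, api_key: str, language: str = "en", page_size: int = 5):
    """Return published fact checks matching a text query."""
    params = {
        "query": query,
        "key": api_key,
        "languageCode": language,
        "pageSize": page_size,
    }
    resp = requests.get(FACT_CHECK_ENDPOINT, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("claims", [])

if __name__ == "__main__":
    # API key assumed to be stored in an environment variable (placeholder name).
    key = os.environ["FACT_CHECK_API_KEY"]
    for claim in search_fact_checks("moon landing photos", key):
        review = (claim.get("claimReview") or [{}])[0]
        print(claim.get("text"), "->", review.get("textualRating"), review.get("url"))
```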

Woodpecker: Chinese Researchers Combat AI Hallucinations

The University of Science and Technology of China (USTC) and Tencent YouTu Lab have released a research paper on a new framework called Woodpecker, designed to correct hallucinations in multimodal large language models (MLLMs). “Hallucination is a big shadow hanging over the rapidly evolving MLLMs,” writes the group, describing the phenomenon as when MLLMs “output descriptions that are inconsistent with the input image.” Solutions to date focus mainly on “instruction-tuning,” a form of retraining that is data and computation intensive. Woodpecker instead takes a training-free approach that purports to correct hallucinations by verifying and revising the generated text itself. Continue reading Woodpecker: Chinese Researchers Combat AI Hallucinations
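The general idea of a training-free, post-hoc correction loop (check each claim in the generated description against the image, then rewrite) can be sketched as follows; the function names and structure are illustrative assumptions for clarity, not the released Woodpecker implementation.

```python
from typing import Callable, List

def correct_description(
    image,
    description: str,
    extract_claims: Callable[[str], List[str]],
    verify_claim_against_image: Callable[[object, str], bool],
    rewrite_description: Callable[[str, List[str]], str],
) -> str:
    """Training-free, post-hoc hallucination correction (illustrative sketch).

    The three callables stand in for model-backed components, e.g. an LLM that
    extracts factual claims, a visual grounding model that checks each claim
    against the image, and an LLM that rewrites the text without unsupported
    claims. They are assumptions for illustration only.
    """
    claims = extract_claims(description)              # e.g., "there are two dogs"
    unsupported = [c for c in claims if not verify_claim_against_image(image, c)]
    if not unsupported:
        return description                            # nothing to correct
    return rewrite_description(description, unsupported)
```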

OpenAI’s Latest Version of DALL-E Integrates with ChatGPT

OpenAI has released the DALL-E 3 generative AI imaging platform in research preview. The latest iteration features more safety options and integrates with OpenAI’s ChatGPT, currently driven by the now seasoned large language model GPT-4. That is the ChatGPT version available to Plus subscribers and enterprise customers, the same groups that will be able to preview DALL-E 3. The free chatbot is built around GPT-3.5. OpenAI says GPT-4 gives DALL-E better contextual understanding, an area where even version 2 exhibited some glaring comprehension glitches. Continue reading OpenAI’s Latest Version of DALL-E Integrates with ChatGPT

Governor Newsom Orders Study of GenAI Benefits and Risks

California Governor Gavin Newsom signed an executive order for state agencies to study artificial intelligence and its impact on society and the economy. “We’re only scratching the surface of understanding what GenAI is capable of,” Newsom suggested. Recognizing “both the potential benefits and risks these tools enable,” he said his administration is “neither frozen by the fears nor hypnotized by the upside.” The move was couched as a “measured approach” that will help California “focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world’s AI leader.” Continue reading Governor Newsom Orders Study of GenAI Benefits and Risks

AP Is Latest Org to Issue Guidelines for AI in News Reporting

After announcing a partnership with OpenAI last month, the Associated Press has issued guidelines for using generative AI in news reporting, urging caution in using artificial intelligence. The news agency has also added a new chapter in its widely used AP Stylebook pertaining to coverage of AI, a story that “goes far beyond business and technology” and is “also about politics, entertainment, education, sports, human rights, the economy, equality and inequality, international law, and many other issues,” according to AP, which says stories about AI should “show how these tools are affecting many areas of our lives.” Continue reading AP Is Latest Org to Issue Guidelines for AI in News Reporting

FTC Investigates OpenAI Over Data Policies, Misinformation

The Federal Trade Commission has opened a civil investigation into OpenAI to determine the extent to which its data policies are harmful to consumers as well as the potentially deleterious effects of misinformation spread through “hallucinations” by its ChatGPT chatbot. The FTC sent OpenAI dozens of questions last week in a 20-page letter instructing the company to contact FTC counsel “as soon as possible to schedule a telephonic meeting within 14 days.” The questions deal with everything from how the company trains its models to the handling of personal data. Continue reading FTC Investigates OpenAI Over Data Policies, Misinformation

OpenAI Launches a Task Force to Control Superintelligent AI

OpenAI believes artificial intelligence exceeding human intelligence “could arrive this decade.” Calling this technology “superintelligence rather than AGI to stress a much higher capability level,” the company warns that even though this new form of cognition holds great promise, it will not necessarily be benevolent. Preparing for the worst, OpenAI has formed an internal unit charged with developing ways to keep superintelligent AI in check. Led by OpenAI’s Ilya Sutskever and Jan Leike, the Superalignment Team will work toward “steering or controlling a potentially superintelligent AI and preventing it from going rogue.” Continue reading OpenAI Launches a Task Force to Control Superintelligent AI

G7 Leaders Call for Global AI Standards at Hiroshima Summit

Leaders at the G7 Summit in Hiroshima, Japan, are calling for discussions that could lead to global standards and regulations for generative AI, with the aim of ensuring responsible use of the technology. The leaders of the world’s largest economies, which in addition to host nation Japan include Canada, France, Germany, Italy, the UK and the U.S., plus the EU, expressed the goal of forming a G7 working group to establish a “Hiroshima AI process” by the end of the year for discussing uniform policies for dealing with AI technologies, including chatbots and image generators. Continue reading G7 Leaders Call for Global AI Standards at Hiroshima Summit

AI Content Farms Spreading Fake Stories and Misinformation

The proliferation of websites spewing misinformation generated by chatbot-powered “content farms” is creating increased concern. Misinformation tracker NewsGuard has identified 49 websites publishing falsehoods authored by generative AI. The discovery raises questions about the technology’s role in turbocharging existing fraud techniques. Several of the offending websites sprang up this year, just as AI tools were made widely available to the public. Some of the sites masquerade as breaking-news outlets, while others rely on tactics such as generic-sounding names. Continue reading AI Content Farms Spreading Fake Stories and Misinformation

Changes Ahead for Big Tech When EU Regulations Enforced

The European Union’s implementation of the Digital Services Act (DSA) and the Digital Markets Act (DMA) is poised to trigger worldwide changes on familiar platforms like Google, Instagram, Wikipedia and YouTube. The DSA addresses consumer safety while the DMA deals with antitrust issues. Proponents say the new laws will help end the era of self-regulating tech companies. As in the U.S., the DSA makes clear that platforms aren’t liable for illegal user-generated content. Unlike U.S. law, however, the DSA does allow users to sue when tech firms are made aware of harmful content but fail to remove it. Continue reading Changes Ahead for Big Tech When EU Regulations Enforced

Meta’s Penalty Reforms Designed to Be More Effective, Fair

Meta Platforms is reforming its penalty system for Facebook policy violations. Based on recommendations from its Oversight Board, the company will focus more on educating users and less on punitive measures like suspending accounts or limiting posts. “While we are still removing violating content just as before,” explains Meta VP of content policy Monika Bickert, “under our new system we will focus more on helping people understand why we have removed their content, which is shown to help prevent re-offending, rather than so quickly restricting their ability to post.” The goal is fairer and more effective content moderation on Facebook. Continue reading Meta’s Penalty Reforms Designed to Be More Effective, Fair

YouTube CEO Wojcicki Steps Down After 25 Years at Google

After nine years as CEO of the world’s largest video-sharing platform, Susan Wojcicki announced last week that she was stepping down from YouTube, to be replaced by the company’s chief product officer, Neal Mohan. The move comes after nearly 25 years with parent company Google, where she started as its first marketing manager (founders Larry Page and Sergey Brin famously set up Google’s early office space in Wojcicki’s Menlo Park garage). Wojcicki is known for leading the charge to acquire YouTube, co-creating Google Image Search and helping to launch AdSense, among numerous other accomplishments. YouTube’s average daily user count has more than doubled under her leadership, and its content has expanded with new services such as YouTube TV, YouTube Premium and YouTube Music. Continue reading YouTube CEO Wojcicki Steps Down After 25 Years at Google

Disinformation Rising on Social Platforms as Policing Wanes

Social media companies appear to be reducing efforts to combat misinformation at a time when the tools for spreading false narratives are reaching new levels of sophistication. As a result of staff cuts at Alphabet, Google’s YouTube subsidiary is reportedly left with one person overseeing worldwide misinformation policy. Twitter eliminated its trust and safety division, while Meta also made changes to its disinformation filtering. Meanwhile, The Guardian has unearthed an Israeli misinformation contractor operating under the name “Team Jorge” that claims to have manipulated more than 30 presidential elections worldwide. Continue reading Disinformation Rising on Social Platforms as Policing Wanes

Google Touts Search Plans During Its ‘Live from Paris’ Event

Google unveiled new search features during its “Live from Paris” event, streamed via YouTube. The emphasis was on multisearch, which the company says will go live globally on mobile in the more than 70 languages where Google Lens is available. Introduced last year, multisearch lets users search with images and text together, driven by an AI technology the company developed called MUM, for Multitask Unified Model. There were no new announcements regarding Bard, Google’s new conversational AI search tool, although media outlets reported that Bard gave an incorrect answer in a Twitter promo the same day. Continue reading Google Touts Search Plans During Its ‘Live from Paris’ Event