By Paula Parisi, November 20, 2024
Capitulating to outside pressure after a barrage of media reports citing unsafe conditions for minors, Roblox is implementing new safeguards. Parents can now access parental controls from their own devices in addition to their child’s device and monitor their child’s screen time. New content labels and improvements to how users under age 13 can communicate on Roblox are additional protections now baked into the platform. “We’ve spent nearly two decades building strong safety systems, but we are always evolving our systems as new technology becomes available,” Roblox explained. Continue reading Roblox Tightens Child Safety Guidelines Amidst Media Outcry
By Paula Parisi, October 28, 2024
President Biden issued the first-ever National Security Memorandum on Artificial Intelligence, outlining how the Pentagon, intelligence agencies and various national security groups should use artificial intelligence technology to advance national interests and deter threats, touching on everything from nuclear weapons to the supply chain. “The NSM is designed to galvanize federal government adoption of AI to advance the national security mission, including by ensuring that such adoption reflects democratic values and protects human rights, civil rights, civil liberties and privacy,” the White House announced in a statement. Continue reading The White House Defines Government Objectives Involving AI
By Paula Parisi, October 25, 2024
The Federal Trade Commission rule targeting fake reviews and paid testimonials went into effect this week. The rule bans the creation, purchase or sale of reviews and opinion pieces attributed to fictional customers, or to real ones who are financially compensated without plainly disclosing the transactional nature of the relationship. The rule, which subjects offenders to civil penalties, also takes aim at businesses that use threats or coercion to thwart the publication of genuine negative reviews. The new FTC rule was approved by unanimous vote in August. Continue reading FTC Rule Prohibiting Fake and Paid Reviews Goes into Effect
By Paula Parisi, July 10, 2024
Apple has approved the Epic Games Store app for iOS and the App Store in the EU. But the battle apparently continues, with Apple couching the move as “temporary,” and Epic founder and CEO Tim Sweeney vowing to fight any reversals. Sweeney says Apple is “demanding we change the buttons in the next version — which would make our store less standard and harder to use. We’ll fight this.” Even a temporary toehold moves Sweeney — whose North Carolina-based Epic Games is home to the popular “Fortnite” — closer to his goal of an alt game store on the insular Apple platform at home and abroad. Continue reading Apple Issues ‘Temporary’ Epic Game Store Approval for iOS
By ETCentric Staff, February 14, 2024
The global augmented reality market is expected to reach $289 billion by 2030, according to a recent study by Research and Markets, and advertisers have taken notice. While software generates the majority of that revenue, followed by hardware, the augmented reality advertising market is projected to generate $1.2 billion in revenue in the U.S. in 2024, according to the Interactive Advertising Bureau. To help foster growth in that nascent sector, the IAB has teamed with the Media Ratings Council to create consistent definitions and measurement guidelines for ads within AR campaigns. Continue reading IAB and MRC Join Forces to Develop AR Advertising Guidelines
By Paula Parisi, January 31, 2024
As parents and educators grapple with figuring out how AI will fit into education, OpenAI is preemptively acting to help answer that question, teaming with learning and child safety group Common Sense Media on informational material and recommended guidelines. The two will also work together to curate “family-friendly GPTs” for the GPT Store that are “based on Common Sense ratings and standards,” the organization said. The partnership aims “to help realize the full potential of AI for teens and families and minimize the risks,” according to Common Sense. Continue reading OpenAI Partners with Common Sense Media on AI Guidelines
By Paula Parisi, December 11, 2023
Bluesky, the decentralized social media app spun out by Twitter co-founder Jack Dorsey and poised to become a competitor to that platform’s successor, X, has passed the 2 million user milestone just 10 months after its launch. Although still in private beta and accessible only through an invite code, Bluesky has been making headlines recently — first for what was criticized as lax content moderation, and then for announcing a public web interface that would allow anyone (and everyone) to view posts by the private network’s members, a policy decision that has reportedly been reversed. Continue reading Bluesky Adds Automated Moderation, Rethinks Web Visibility
By Paula Parisi, November 28, 2023
The United States, Britain and 16 other countries have signed a 20-page agreement on working together to keep artificial intelligence safe from bad actors, mandating collaborative efforts for creating AI systems that are “secure by design.” The 18 countries said they will aim to ensure companies that design and utilize AI develop and deploy it in a way that protects their customers and the public from abuse. The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) jointly released the Guidelines for Secure AI System Development. Continue reading U.S., Britain and 16 Nations Aim to Make AI Secure by Design
By Paula Parisi, November 21, 2023
Germany, France and Italy have reached an agreement on a strategy to regulate artificial intelligence. The agreement comes on the heels of infighting among key European Union member states that has held up legislation, and it could accelerate the broader EU negotiations. The three governments support binding voluntary commitments for large and small AI providers and endorse “mandatory self-regulation through codes of conduct” for foundation models while opposing “un-tested norms.” The paper underscores that “the AI Act regulates the application of AI and not the technology as such,” locating the “inherent risks” in how AI is applied rather than in the technology itself. Continue reading Germany, France and Italy Strike AI Deal, Pushing EU Forward
By Paula Parisi, November 2, 2023
OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats involving imminent “superintelligence” AI, also called frontier models. Topics under review include the required parameters for a robust monitoring and prediction framework and how malicious actors might seek to leverage stolen AI model weights. The announcement was made shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models. Continue reading OpenAI Creates a Team to Examine Catastrophic Risks of AI
By Paula Parisi, September 26, 2023
AI tech startup Capsule is debuting a video editor it says can help enterprise teams achieve results “10x faster.” “Today, if you work at a large company — in marketing or comms, or maybe even sales or HR — creating even the simplest video can be daunting,” Capsule suggests. After querying more than 300 such enterprise teams about their pain points, Capsule focused on three areas of improvement: simplifying motion graphics, adhering to strict brand guidelines, and making the editing process more collaborative among teams across desktop and mobile, where apps are typically “siloed.” Continue reading AI Startup Capsule Creates Video Editor for Enterprise Teams
By Paula Parisi, August 18, 2023
After announcing a partnership with OpenAI last month, the Associated Press has issued guidelines for using generative AI in news reporting, urging caution in using artificial intelligence. The news agency has also added a new chapter in its widely used AP Stylebook pertaining to coverage of AI, a story that “goes far beyond business and technology” and is “also about politics, entertainment, education, sports, human rights, the economy, equality and inequality, international law, and many other issues,” according to AP, which says stories about AI should “show how these tools are affecting many areas of our lives.” Continue reading AP Is Latest Org to Issue Guidelines for AI in News Reporting
By Paula Parisi, August 17, 2023
OpenAI has shared instructions for training GPT-4 to handle content moderation at scale. Some customers are already using the process, which OpenAI says can reduce the time for fine-tuning content moderation policies from weeks or months to mere hours. The company proposes its customization technique can also save money by having GPT-4 do the work of tens of thousands of human moderators. Properly trained, GPT-4 could perform moderation tasks more consistently in that it would be free of human bias, OpenAI says. While AI can incorporate biases from training data, technologists view AI bias as more correctable than human predisposition. Continue reading OpenAI: GPT-4 Can Help with Content Moderation Workload
By Paula Parisi, April 15, 2022
TikTok has officially gone live with Effect House, the augmented reality tool that allows users to create AR filters and share them with the community. The ByteDance company has been testing the feature since last summer. Since then, at least 450 creators have used Effect House to create more than 1.5 billion videos that generated over 600 billion global views, according to TikTok. “Whether you’re teleporting into new worlds with Green Screen or freeze-framing with Time Warp Scan,” TikTok says, Effect House empowers expression “through a wide array of engaging and immersive formats.” Continue reading TikTok Launches Effect House for User-Generated AR Filters
By Debra Kaufman, June 10, 2020
Tech blogger and app researcher Jane Manchun Wong discovered that Twitter is developing a new verification service. The original 2016 service placed a blue-and-white checkmark next to a verified personal account, brand or company. The service was halted in 2017 after it verified the account of Jason Kessler, an organizer of the Unite the Right rally in Charlottesville, Virginia. According to Twitter co-founder and chief executive Jack Dorsey, the company planned to expand the service in 2018 but didn’t have the bandwidth to do so. Continue reading Twitter Is Developing a New, Transparent Verification System