By Paula Parisi, November 28, 2023
The United States, Britain and 16 other countries have signed a 20-page agreement on working together to keep artificial intelligence safe from bad actors, urging collaboration on creating AI systems that are “secure by design.” The 18 countries said they will aim to ensure companies that design and utilize AI develop and deploy it in a way that protects their customers and the public from abuse. The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) jointly released the Guidelines for Secure AI System Development. Continue reading U.S., Britain and 16 Nations Aim to Make AI Secure by Design
By Paula Parisi, October 25, 2023
OpenAI is developing an AI tool that can identify images created by artificial intelligence — specifically those made in whole or part by its DALL-E 3 image generator. Calling it a “provenance classifier,” company CTO Mira Murati began publicly discussing the detection app last week but said not to expect it in general release anytime soon. This, despite Murati’s claim it is “almost 99 percent reliable.” That is still not good enough for OpenAI, which knows there is much at stake when the public perception of artists’ work can be impacted by a filter applied by AI, which is notoriously capricious. Continue reading OpenAI Developing ‘Provenance Classifier’ for GenAI Images
By George Gerba, October 12, 2023
Earlier this year, Google introduced support for passkeys as part of a larger initiative to improve security and eventually eliminate the need for passwords. Since the launch, consumers have begun using passkeys across Google apps such as Search, YouTube and Maps. As the next step in establishing “a simpler and more secure way to sign into your accounts online,” and following positive feedback from early users, the company is offering passkeys as the default option across personal accounts. When signing into accounts, users will receive prompts for creating passkeys. Additionally, Google account settings will feature a toggle that reads “skip password when possible.” Continue reading Google Makes Passkeys Default Option on Personal Accounts
By Paula Parisi, September 22, 2023
OpenAI has released the DALL-E 3 generative AI imaging platform in research preview. The latest iteration features more safety options and integrates with OpenAI’s ChatGPT, currently driven by the now seasoned large language model GPT-4. That is the ChatGPT version to which Plus subscribers and enterprise customers have access — the same users who will be able to preview DALL-E 3. The free chatbot is built around GPT-3.5. OpenAI says GPT-4 makes for better contextual understanding by DALL-E, which even in version 2 evidenced some glaring comprehension glitches. Continue reading OpenAI’s Latest Version of DALL-E Integrates with ChatGPT
By Paula Parisi, September 19, 2023
The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safety measures amidst rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot, and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA. Continue reading UK’s Competition Office Issues Principles for Responsible AI
By Paula Parisi, September 19, 2023
California lawmakers have put data brokers on notice. A bill known as the Delete Act would allow consumers to require all such information peddlers to delete their personal information with a single request. The bill defines “data brokers” as businesses that collect and sell people’s personal information, including residential address, marital status and purchases. Both houses last week passed the proposed legislation — Senate Bill 362 — and it now heads to Governor Newsom’s desk. If he signs it, the new law will go into effect in January 2026. Continue reading California Plans to Protect Consumer Privacy with Delete Act
By Paula Parisi, September 8, 2023
California Governor Gavin Newsom signed an executive order for state agencies to study artificial intelligence and its impact on society and the economy. “We’re only scratching the surface of understanding what GenAI is capable of,” Newsom suggested. Recognizing “both the potential benefits and risks these tools enable,” he said his administration is “neither frozen by the fears nor hypnotized by the upside.” The move was couched as a “measured approach” that will help California “focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world’s AI leader.” Continue reading Governor Newsom Orders Study of GenAI Benefits and Risks
By Paula Parisi, August 23, 2023
A draft agreement said to have been presented by the U.S. government to ByteDance that would let TikTok avoid a federal ban seeks “near unfettered access” to company data and “unprecedented control” over platform functions. The nearly 100-page document, reported on this week, seeks control federal officials don’t have over other media outlets — social or otherwise — raising domestic concerns about government overreach. The draft dates to summer 2022. It is not known whether it has been updated or if the secretive negotiations between ByteDance and the Committee on Foreign Investment in the United States (CFIUS) have since continued. Continue reading Plans for TikTok Containment Would Give Feds Broad Power
By Paula Parisi, August 21, 2023
Illinois has become the first state in the nation to pass legislation protecting children who are social media influencers. Beginning in July 2024, children under 16 who appear in monetized video content online will have a legal right to compensation for their work, even if that means litigating against their parents. “The rise of social media has given children new opportunities to earn a profit,” Illinois Senator David Koehler said about the bill he sponsored. “Many parents have taken this opportunity to pocket the money, while making their children continue to work in these digital environments.” Continue reading Illinois Law Protecting Child Vloggers Will Take Effect in 2024
By Paula Parisi, August 17, 2023
OpenAI has shared instructions for training GPT-4 to handle content moderation at scale. Some customers are already using the process, which OpenAI says can reduce the time for fine-tuning content moderation policies from weeks or months to mere hours. The company proposes its customization technique can also save money by having GPT-4 do the work of tens of thousands of human moderators. Properly trained, GPT-4 could perform moderation tasks more consistently in that it would be free of human bias, OpenAI says. While AI can incorporate biases from training data, technologists view AI bias as more correctable than human predisposition. Continue reading OpenAI: GPT-4 Can Help with Content Moderation Workload
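The core idea behind the approach is that the written policy itself becomes the model's instructions, so revising moderation rules means editing a prompt rather than retraining a model. A minimal sketch of what that could look like is below; the policy text, label names, and helper function are illustrative assumptions, not OpenAI's published recipe or API.

```python
def build_moderation_prompt(policy: str, labels: list[str], content: str) -> list[dict]:
    """Return a chat-style message list asking a model (e.g. GPT-4) to
    classify content against a written policy. Because the policy is
    plain text in the prompt, policy edits take effect on the next call,
    which is what shortens the iteration loop from weeks to hours."""
    system = (
        "You are a content moderator. Apply the policy below and answer "
        f"with exactly one label from: {', '.join(labels)}.\n\n"
        f"POLICY:\n{policy}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": content},
    ]

# Hypothetical policy and labels for illustration only.
messages = build_moderation_prompt(
    policy="Disallow instructions for building weapons.",
    labels=["allowed", "disallowed"],
    content="How do I bake bread?",
)
# `messages` would then be sent to a chat-completion endpoint and the
# returned label compared against human judgments to refine the policy.
```

In practice, OpenAI describes iterating this way: run the prompted model over a test set, inspect disagreements with human labelers, clarify the policy wording, and re-run until the labels converge.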
By Paula Parisi, July 31, 2023
The Senate has cleared two children’s online safety bills despite pushback from civil liberties groups that say the digital surveillance used to monitor behavior will result in an Internet less safe for kids. The Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) are intended to address a mental health crisis experts blame in large part on social media, but critics say the bills could cause more harm than good by forcing social media firms to collect more user data as part of enforcement. The bills — which cleared the Senate Commerce Committee by unanimous vote — are also said to reduce access to encrypted services. Continue reading Government Advances Online Safety Legislation for Children
By Paula Parisi, July 27, 2023
Advancing President Biden’s push for responsible development of artificial intelligence, top AI firms including Anthropic, Google, Microsoft and OpenAI have launched the Frontier Model Forum, an industry group that will work collaboratively with outside researchers and policymakers to implement best practices. The new group will focus on AI safety, research into its risks, and disseminating information to the public, governments and civil society. Other companies involved in building bleeding-edge AI models will also be invited to join and participate in technical evaluations and benchmarks. Continue reading Major Tech Players Launch Frontier Model Forum for Safe AI
By Paula Parisi, July 24, 2023
President Biden has secured voluntary commitments from seven leading AI companies who say they will support the executive branch goal of advancing safe, secure and transparent development of artificial intelligence. Executives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI convened at the White House on Friday to support the accord, which some criticized as a half measure, claiming the companies have already embraced independent security testing and a commitment to collaborating with each other and the government. Biden stressed the need to deploy AI altruistically, “to help address society’s greatest challenges.” Continue reading Top Tech Firms Support Government’s Planned AI Safeguards
By Paula Parisi, June 14, 2023
A bill passed by the Louisiana State Legislature that bans minors from creating social media accounts without parental consent is the latest in a string of legal measures that take aim at the online world to combat a perceived mental health crisis among America’s youth. Utah also recently passed a law requiring consent of a parent or guardian when anyone under 18 wants to create a social account. And California now mandates some sites default to the highest privacy for minor accounts. The Louisiana legislation stands out as extremely restrictive, encompassing multiplayer games and video-sharing apps. Continue reading Louisiana Approves Parental Consent Bill for Online Accounts
By Paula Parisi, June 1, 2023
Twitter is emphasizing crowdsourced moderation. The launch of Community Notes for images in posts seeks to address instances where morphed or AI-generated images are posted. The idea is to expose altered content before it goes viral, as did the image of Pope Francis wearing a Balenciaga puffy coat in March and the fake image of an explosion at the Pentagon in May. Twitter says Community Notes about an image will appear with “recent and future” posts containing the graphic in question. Currently in the test phase, the feature works with tweets featuring a single image. Continue reading Twitter Community Notes Aim to Curb Impact of Fake Images