CES: FTC Commissioner Rebecca Slaughter on AI Regulation
By Debra Kaufman, January 10, 2024
In a CES conversation with Consumer Technology Association Senior Director of Regulatory Affairs Rachel Nemeth, FTC Commissioner Rebecca Slaughter discussed the Commission's work on AI-enabled impersonation fraud, privacy, and right of repair. Taking the stage just after FDA Commissioner Robert Califf, Slaughter said she wanted to co-sign his plea for "full visibility of the work we do." "We have responsibility to all Americans to make sure they are represented in the substance of the work we do," she said. "The same is true for industries that want to reach all Americans."
Microsoft Copilot AI Customers Shielded from Legal Exposure
By Paula Parisi, September 11, 2023
Microsoft says it will assume legal responsibility for commercial customers who get sued for copyright infringement as a result of using the company's AI Copilot services. A new initiative called the Copilot Copyright Commitment is designed to provide peace of mind to Microsoft business users as more copyright holders challenge the handling of protected works by the companies building AI models. "If a third party sues a commercial customer for copyright infringement for using Microsoft's Copilots or the output they generate, we will defend the customer" and pay any resulting fees, including settlements, Microsoft says.
Netflix Threatens to Purge Content to Avoid UK Streamer Bill
By Paula Parisi, June 1, 2023
Netflix says it will preemptively purge its UK library of films and TV shows that run afoul of new streamer regulations being implemented by the British government. UK ministers are calling on media regulator Ofcom to police streaming content as it does traditional broadcasters, which means video-on-demand platforms including Netflix and Amazon Prime Video could face fines of up to $310,000 per instance for hosting "harmful material." Draft legislation that seeks to codify "due impartiality" for streamers as part of the proposed Media Bill was rebuked by Netflix as "nebulous" and potentially "onerous."
Supreme Court Sides with Social Media Platforms on Liability
By Paula Parisi, May 22, 2023
The U.S. Supreme Court upheld the status quo regarding Section 230 of the Communications Decency Act, opting in two separate cases not to strike down as unconstitutional the statutory provision that shields social media platforms from liability for user posts. The rulings, which involved Google, Twitter and Facebook, were greeted with relief by Big Tech. Although Congress has been vocal about paring back Section 230, a change in the law would be far less disruptive than the seismic aftershocks that would inevitably have been triggered by a reversal.
Utah's Social Media Law Requires Age Verification for Minors
By Paula Parisi, April 10, 2023
Utah has become the first state to pass laws requiring social media platforms to obtain age verification before users can register. The law is designed to force social networks to enforce parental consent provisions. As of March 2024, companies including Facebook, Instagram, Snap, TikTok and Twitter will be required to secure proof of age for Utah users via a valid ID instead of just letting people type in their birth date at sign-up. While Utah is out front on the issue, nine other states have proposed legislation that includes age checks, most recently Arkansas.
Senate Message to Big Tech Is Expect Reform to Section 230
By Paula Parisi, March 10, 2023
Bipartisan support is growing in the Senate for changes to Section 230, the part of the Communications Decency Act that grants federal immunity to social media platforms and other tech giants for content users post on their sites. At a combative Senate Judiciary Committee hearing Wednesday, lawmakers from both parties called for gutting major provisions of the legal liability shield, on which Big Tech has come to rely. Senators accused tech firms of putting profits over user safety and slammed the U.S. Supreme Court, which appeared to approach the matter with caution last month in Gonzalez v. Google.
Twitter Investors Back Musk Offer as Whistleblower Testifies
By Paula Parisi, September 15, 2022
Twitter shareholders this week approved the $44 billion takeover bid by Elon Musk, voting on the same day whistleblower Peiter Zatko testified at a Senate Judiciary Committee hearing, telling lawmakers that the social media company's leadership misled regulators about security failures. Senator Chuck Grassley (R-Iowa) expressed skepticism that Twitter CEO Parag Agrawal would keep his job if Zatko's allegations prove true, saying the executive "rejected this committee's invitation by claiming that it would jeopardize Twitter's ongoing litigation" with Musk. Twitter has categorically denied Zatko's claims, which include allegations that foreign agents infiltrated Twitter's workforce.
FTC Explores New Rules Surrounding Data Collection and AI
By Paula Parisi, September 14, 2022
The Electronic Privacy Information Center is calling on the Federal Trade Commission to create rules that would protect the digital privacy of teens. Human Rights Watch is asking the FTC for safeguards to prevent education companies from selling minors' personal information to data brokers and for a ban on data-driven advertising targeting children. Both groups were represented at the FTC's first public forum to explore adopting new rules around data collection and AI training on personal data. Practices the FTC is examining include how long companies may retain consumer data and whether to mandate audits of automated decision-making systems.
EU's AI Act Could Present Dangers for Open-Source Coders
By Paula Parisi, September 8, 2022
The EU's draft AI Act is causing quite a stir, particularly as it pertains to regulating general-purpose artificial intelligence, including guidelines for open source developers that specify procedures for accuracy, risk management, transparency, technical documentation and data governance, as well as cybersecurity. The first law on AI by a major regulator anywhere, the proposed AI Act seeks to promote "trustworthy AI," but some critics say that as written the legislation could hurt open efforts to develop AI systems. The EU is seeking industry input as the proposal heads for a vote this fall.
Australia's Highest Court Rules Google Links Not Defamatory
By Paula Parisi, August 19, 2022
In a major reversal, Australia's highest court found Google not liable for defamatory content linked through search results, ruling that the Alphabet subsidiary "was not a publisher" of the objectionable content. Google was sued for defamation over a 2004 article appearing in its search engine results, and both the trial court and a circuit court of appeals held Google responsible as a "publisher" because it was instrumental in circulating the contents of the offending article. The lower courts rejected Google's reliance on the statutory and common law defenses of innocent dissemination and qualified privilege.
Massachusetts Court Objects to Gig Worker Ballot Measure
By Paula Parisi, June 16, 2022
A proposed Massachusetts ballot initiative designating gig drivers as independent contractors was nixed by a state court that deemed it an attempt by companies like Uber and Lyft to avoid liability in the event of an accident or crime. The Tuesday ruling effectively halted a $17.8 million campaign in support of a measure the Massachusetts Supreme Judicial Court said violates the State Constitution, with hidden language excepting drivers from being "an employee or agent" of a gig company. The move is the latest in a series of skirmishes between gig companies and local governments.
Criminal Liability Will Be Added to the UK's Online Safety Bill
By Paula Parisi, March 18, 2022
Big Tech executives may find themselves facing UK prosecution or jail time sooner than expected as the target date for Online Safety Bill (OSB) enforcement shrinks to within two months of the bill becoming law, rather than the two years originally proposed. Several new offenses have been added to the bill, including criminal liability for those who destroy evidence, fail to cooperate with Ofcom investigations or impede regulatory inspections. Facebook, Instagram, TikTok, Twitter and YouTube can all expect audits for the sort of harmful content the OSB seeks to address.
UK Lawmakers Are Taking Steps to Toughen Online Safety Bill
By Paula Parisi, December 15, 2021
British lawmakers are seeking "major changes" to the forthcoming Online Safety Bill, which cracks down on Big Tech but apparently does not go far enough. Expansions under discussion include legal consequences for tech firms and new rules for online fraud, advertising scams and deepfake (AI-generated) adult content. Comparing the Internet to the "Wild West," Damian Collins, chairman of the joint committee that issued the report, went so far as to suggest corporate directors be subject to criminal liability if their companies withhold information or fail to comply with the act.
Senate Wants Social Firms to Pay for Holding Back Research
By Paula Parisi, December 14, 2021
The U.S. Senate has introduced the bipartisan Platform Accountability and Transparency Act (PATA), which if passed into law would allow independent researchers to sue Big Tech for failing to provide requested data. The move follows last week's Instagram hearing, where leaked internal research pointed to the platform's negative effects on the mental health of teens. On December 6, an international coalition of more than 300 scientists sent an open letter to Mark Zuckerberg (CEO of Meta Platforms, the company that owns Instagram and Facebook) requesting that the social behemoth voluntarily share its research.
Government Questions Liability Shield Offered by Section 230
By Paula Parisi, December 3, 2021
The U.S. House of Representatives is signaling intent to proceed with legislation to scale back the Section 230 liability shield for Big Tech. The move follows a frontal assault by Australia's Parliament on that country's version of the law and global saber-rattling against protections that prevent social platforms from being held legally accountable for user-posted content that harms others. At a Wednesday hearing on various Section 230 bills, House Energy and Commerce Committee chairman Frank Pallone (D-New Jersey) said that while the protections were vital to Internet growth, they have also resulted in anti-social behavior.