OpenAI Announces $200 Monthly Subscription for ChatGPT Pro

OpenAI has launched ChatGPT Pro, a $200 per month subscription plan that provides unlimited access to the full version of o1, its new large reasoning model, and all other OpenAI models. The toolkit includes o1-mini, GPT-4o and Advanced Voice. It also includes the new o1 pro mode, “a version of o1 that uses more compute to think harder and provide even better answers to the hardest problems,” OpenAI explains, describing the high-end subscription plan as a path to “research-grade intelligence” for scientists, engineers, enterprises, academics and others who use AI to accelerate productivity. Continue reading OpenAI Announces $200 Monthly Subscription for ChatGPT Pro

Lightricks LTX Video Model Impresses with Speed and Motion

Lightricks has released an AI model called LTX Video (LTXV) that it says generates five seconds of 768 x 512 resolution video (121 frames) in just four seconds, outputting in less time than it takes to watch. The model can run on consumer-grade hardware and is open source, positioning Lightricks as a mass-market challenger to firms like Adobe, OpenAI, Google and their proprietary systems. “It’s time for an open-sourced video model that the global academic and developer community can build on and help shape the future of AI video,” Lightricks co-founder and CEO Zeev Farbman said. Continue reading Lightricks LTX Video Model Impresses with Speed and Motion

Google Unveils New Updates to Its AI-Powered NotebookLM

Google has updated its AI assistant, NotebookLM, allowing the AI note-taking and research tool to generate summaries of audio files and YouTube videos. First released at the Google I/O developer conference in 2023, NotebookLM also creates shareable AI-generated audio discussions and podcasts. It allows users to upload file formats including PDFs, Google Docs, Google Slides and websites. The items, including text, can be stored in shareable “notebooks,” organizing material in a central location, and users can ask Google’s Gemini AI questions about the notebook material. Initially embraced by students and educators, it has become equally popular among business users. Continue reading Google Unveils New Updates to Its AI-Powered NotebookLM

AWS Transfers OpenSearch Stewardship to Linux Foundation

Amazon is transferring its OpenSearch platform to the Linux Foundation’s new OpenSearch Software Foundation. By handing the open-source project, which it has developed internally since 2021, to a third party, Amazon hopes to accelerate collaboration in data-driven search and analytics, an area of focus due to the proliferation of model training. Not to be confused with commercial search engines (Google, Bing), engines like OpenSearch are geared toward enterprise and academia. Because it is licensed under Apache 2.0, OpenSearch is a viable starting point for organizations that customize internal platforms for searching, monitoring and analyzing large volumes of data. Continue reading AWS Transfers OpenSearch Stewardship to Linux Foundation

MIT’s AI Risk Assessment Database Debuts with 700 Threats

The list of potential risks associated with artificial intelligence continues to grow. “Global AI adoption is outpacing risk understanding,” warns the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL), which has joined with the MIT multidisciplinary computer group FutureTech to compile the AI Risk Repository, a “living database” of more than 700 unique risks drawn from 43 sources. Organized by cause, classifying “how, when and why these risks occur,” the repository comprises seven risk domains (for example, “misinformation”) and 23 subdomains (such as “false or misleading information”). Continue reading MIT’s AI Risk Assessment Database Debuts with 700 Threats

UK Launches New Open-Source Platform for AI Safety Testing

The UK AI Safety Institute announced the availability of its new Inspect platform, designed for evaluating and testing artificial intelligence technology in order to help develop safe AI models. The Inspect toolset enables testers — including worldwide researchers, government agencies and startups — to analyze the specific capabilities of such models and establish scores based on various criteria. According to the Institute, the “release comes at a crucial time in AI development, as more powerful models are expected to hit the market over the course of 2024, making the push for safe and responsible AI development more pressing than ever.” Continue reading UK Launches New Open-Source Platform for AI Safety Testing

Researchers Call for Safe Harbor for the Evaluation of AI Tools

Artificial intelligence stakeholders are calling for safe harbor legal and technical protections that will allow them access to conduct “good-faith” evaluations of various AI products and services without fear of reprisal. More than 300 researchers, academics, creatives, journalists and legal professionals had, as of last week, signed an open letter calling on companies including Meta Platforms, OpenAI and Google to allow access for safety testing and red teaming of systems they say are shrouded in opaque rules and secrecy, even though millions of consumers are already using them. Continue reading Researchers Call for Safe Harbor for the Evaluation of AI Tools

Google Targets Global Security with AI Cyber Defense Initiative

Google has unveiled a new policy, the AI Cyber Defense Initiative, designed to harness the power of artificial intelligence to improve global cybersecurity defenses. The proposed policy aims to counteract rapidly evolving threats by using AI to improve threat detection, automate vulnerability management and enhance incident response effectiveness. The Alphabet company introduced its new plan at the Munich Security Conference, where it also announced it has a pool of $2 million to award businesses and academic institutions for research initiatives involving large language models, code verification and other AI uses for cyber offense and defense. Continue reading Google Targets Global Security with AI Cyber Defense Initiative

OpenAI Partners with Common Sense Media on AI Guidelines

As parents and educators grapple with figuring out how AI will fit into education, OpenAI is preemptively acting to help answer that question, teaming with learning and child safety group Common Sense Media on informational material and recommended guidelines. The two will also work together to curate “family-friendly GPTs” for the GPT Store that are “based on Common Sense ratings and standards,” the organization said. The partnership aims “to help realize the full potential of AI for teens and families and minimize the risks,” according to Common Sense. Continue reading OpenAI Partners with Common Sense Media on AI Guidelines

Schumer Shares Plan for SAFE AI Senate Listening Sessions

Senate Majority Leader Chuck Schumer unveiled his approach toward regulating artificial intelligence, beginning with nine listening sessions to explore topics including AI’s impact on the job market, copyright, national security and “doomsday scenarios.” Schumer’s plan — the SAFE (Security, Accountability, Foundations, Explainability) Innovation framework — isn’t proposed legislation, but a discovery roadmap. Set to begin in September, the panels will draw on members of industry, academia and civil society. “Experts aren’t even sure which questions policymakers should be asking,” said Schumer of the learning curve. “In many ways, we’re starting from scratch.” Continue reading Schumer Shares Plan for SAFE AI Senate Listening Sessions

Report: Enterprise Supplants Academia as Driving Force of AI

After many years of academia leading the way in the development of artificial intelligence, the tides have shifted and industry has taken over, according to the 2023 AI Index, a report created by Stanford University with help from companies including Google, Anthropic and Hugging Face. “In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia,” the report says. The shift in influence is attributed mainly to the large resource demands — in staff, computing power and training data — required to create state-of-the-art AI systems. Continue reading Report: Enterprise Supplants Academia as Driving Force of AI

CES: Dimension X Demos Bonfire Virtual Story Creation Tools

For the most part, exhibitors in the Gaming and Metaverse areas at CES 2023 didn’t touch on the latent problems with consumer adoption of the metaverse. While worlds like “Fortnite” and “Roblox” that draw consistently high MAUs do so because they offer fun mechanics, the many metaverse platforms on exhibit generally did not provide compelling reasons for why companies — much less consumers — should spend time in their worlds. Dimension X’s booth was a standout on the floor, however, as it showcased Bonfire, a soon-to-be-released tool to enable the seamless creation of narrative mechanics within virtual worlds. Continue reading CES: Dimension X Demos Bonfire Virtual Story Creation Tools

NFTs Are Poised to Move Beyond Arts into Academia, Health

As NFTs work their way into the social fabric via digital art and collectibles, there is speculation that their usefulness is only beginning to be understood. While non-fungible tokens have gained popularity due to their use in illustration, music, entertainment, gaming and sports, as a medium they’re still in their infancy. As units of data saved onto a blockchain, the provenance of every NFT is trackable, substantiating ownership and authenticity. As such, there is interest in using them for everything from educational credentialing and documenting medical treatment to automotive applications and philanthropic fundraising. Continue reading NFTs Are Poised to Move Beyond Arts into Academia, Health

Facebook Apologizes for Providing Researchers Flawed Data

Facebook apologized to researchers this week for data released years ago but only recently revealed to inaccurately represent how U.S. users interact with posts and links. Reaching out via email and on a conference call with 47 people, the social media giant attempted to mitigate the harm to academics and analysts who have already spent about two years studying what they now say, and Facebook seems to agree, is flawed data about how misinformation spreads on its platform. The problem was traced to Facebook having underreported the number of U.S. users and their data by about half. Continue reading Facebook Apologizes for Providing Researchers Flawed Data

ETC Executive Coffee: A Talk with Vubiquity’s Darcy Antonellis

During the seventh installment of ETC@USC’s Executive Coffee with… series, Vubiquity CEO Darcy Antonellis posed an intriguing question for USC students: “If you were asked to create the educational system of the future, what would learning look like for college-age students or post-grads such as yourself?” Graduate and undergraduate students from the USC School of Cinematic Arts and the Iovine and Young Academy participated in this lively November 4 discussion. Students expressed interest in online schedules, networking meet-ups, collaboration and support, the technology gap, group-based learning and more. Continue reading ETC Executive Coffee: A Talk with Vubiquity’s Darcy Antonellis