By Paula Parisi, November 5, 2024
The Massachusetts Institute of Technology has come up with what it thinks is a better way to teach robots general-purpose skills. Derived from LLM techniques, the method gives the robot intelligence access to an enormous amount of data at once, rather than exposing it to individual programs for specific tasks. Faster and more cost-efficient, the method has been described as a “brute force” approach to problem-solving, and machine learning researchers have taken to it in place of individualized, task-specific “imitation learning.” Early tests show it outperforming traditional training by more than 20 percent in both simulated and real-world conditions. Continue reading MIT Intros LLM-Inspired Teacher for General Purpose Robots
By Paula Parisi, October 2, 2024
AI startup Liquid, founded by alums of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), has released its first models. Called Liquid Foundation Models, or LFMs, the multimodal family approaches “intelligence” differently from the pre-trained transformer models that dominate the field. Instead, the LFMs are built from “first principles,” which Liquid describes as “the same way engineers build engines, cars, and airplanes,” explaining that the models are large neural networks with computational units “steeped in theories of dynamic systems, signal processing and numeric linear algebra.” Continue reading MIT Spinoff Liquid Eschews GPTs for Its Fluid Approach to AI
By Paula Parisi, August 19, 2024
The list of potential risks associated with artificial intelligence continues to grow. “Global AI adoption is outpacing risk understanding,” warns the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL), which has joined with FutureTech, MIT’s multidisciplinary computer group, to compile the AI Risk Repository, a “living database” of more than 700 unique risks extracted from 43 source categories. Organized by cause, classifying “how, when and why these risks occur,” the repository comprises seven risk domains (for example, “misinformation”) and 23 subdomains (such as “false or misleading information”). Continue reading MIT’s AI Risk Assessment Database Debuts with 700 Threats
By ETCentric Staff, March 28, 2024
Researchers from the Massachusetts Institute of Technology and Adobe have unveiled a new AI acceleration tool that makes generative apps like DALL-E 3 and Stable Diffusion up to 30x faster by reducing the generation process to a single step. The new approach, called distribution matching distillation, or DMD, maintains or enhances image quality while greatly streamlining computation. Theoretically, the technique “marries the principles of generative adversarial networks (GANs) with those of diffusion models,” consolidating “the hundred steps of iterative refinement required by current diffusion models” into one step, MIT PhD student and project lead Tianwei Yin says. Continue reading New Tech from MIT, Adobe Advances Generative AI Imaging
By ETCentric Staff, March 11, 2024
Artificial intelligence stakeholders are calling for safe harbor legal and technical protections that would allow them to conduct “good-faith” evaluations of AI products and services without fear of reprisal. As of last week, more than 300 researchers, academics, creatives, journalists and legal professionals had signed an open letter calling on companies including Meta Platforms, OpenAI and Google to allow access for safety testing and red teaming of systems they say are shrouded in opaque rules and secrecy, despite the fact that millions of consumers are already using them. Continue reading Researchers Call for Safe Harbor for the Evaluation of AI Tools
By Phil Lelyveld, January 16, 2024
A CES session on government AI policy featured an address by Assistant Secretary of Commerce for Communications and Information Alan Davidson (who is also administrator of the National Telecommunications and Information Administration), followed by a discussion of government activities and, finally, industry perspective from executives at Google, Microsoft and Xperi. Davidson studied at MIT under nuclear scientist Professor Philip Morrison, who spent the first half of his career developing the atomic bomb and the second half trying to stop its use. That lesson was not lost on Davidson: at NTIA, he and his team are working to ensure “that new technologies are developed and deployed in the service of people and in the service of human progress.” Continue reading CES: Panelists Weigh Need for Safe AI That Serves the Public
By Paula Parisi, December 12, 2023
Apple is emphasizing the importance of data encryption with a report showing personal data breaches up 300 percent between 2013 and 2022. In the past two years, more than 2.6 billion personal records have been exposed, according to the newly released study “The Continued Threat to Personal Data: Key Factors Behind the 2023 Increase.” The report, written by Dr. Stuart Madnick, founding director of Cybersecurity at MIT Sloan, cites increasing dependence on cloud computing as the main factor behind the surge. U.S. data intrusions through Q3 of this year were already 20 percent higher than in all 12 months of 2022. Continue reading Apple Says U.S. Data Breaches Up by More Than 20 Percent
By Paula Parisi, November 2, 2023
OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats involving imminent “superintelligence” AI, also called frontier models. Topics under review include the parameters required for a robust monitoring and prediction framework and how malicious actors might seek to leverage stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models. Continue reading OpenAI Creates a Team to Examine Catastrophic Risks of AI
By Paula Parisi, September 25, 2023
As creators embrace artificial intelligence to juice creativity, TikTok is launching a tool that helps them label their AI-generated content while also beginning to test “ways to label AI-generated content automatically.” “AI enables incredible creative opportunities, but can potentially confuse or mislead viewers,” TikTok said in announcing labels that can apply to “any content that has been completely generated or significantly edited by AI,” including video, photographs, music and more. The platform also touted a policy that “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize.” Continue reading TikTok Creates New Tools for Labeling Content Created by AI
By Paula Parisi, September 13, 2023
Google is establishing a $20 million fund to promote responsible AI through its charitable arm, Google.org. The investment will provide grants to academics and think tanks as part of the company’s new Digital Futures Project, announced on the eve of today’s private meeting between Congress and AI-focused tech giants. “AI has the potential to make our lives easier and address some of society’s most complex challenges — like preventing disease, making cities work better and predicting natural disasters. But it also raises questions about fairness, bias, misinformation, security and the future of work,” Google said. Continue reading Google Digital Futures Project Pumps $20M into Responsible AI
By Paula Parisi, June 23, 2023
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a computer vision system that combines image recognition and image generation technology into one training model instead of two. The result, MAGE (short for MAsked Generative Encoder), holds promise for a wide variety of use cases and is expected to reduce costs through unified training, according to the team. “To the best of our knowledge, this is the first model that achieves close to state-of-the-art results for both tasks using the same data and training paradigm,” the researchers said. Continue reading MAGE AI Unifies Generative and Recognition Image Training
By Paula Parisi, June 13, 2023
Sam Altman continues to call for coordinated international regulation of artificial intelligence. The OpenAI co-founder and CEO visited Seoul this past weekend to meet with South Korean President Yoon Suk Yeol, who issued a statement saying it is important to act “with a sense of speed” in establishing international standards or face unwanted “side effects.” Altman also delivered a virtual keynote address to Chinese AI researchers at an annual conference hosted by the Beijing Academy of Artificial Intelligence, calling on China to participate in global rulemaking. Continue reading Altman Calls on China to Participate in Global AI Rulemaking
By Paula Parisi, April 26, 2023
Generative AI has become a buzzword in the business community: 65 percent of executives in a recent KPMG survey say they believe the technology will have a high or extremely high impact on their organization in the next three to five years. Yet most say they are unprepared for immediate adoption, with 60 percent estimating they are 12 to 24 months away from implementing their first generative AI solution. Fewer than half of respondents say they have the right technology, talent and governance in place to successfully implement generative AI. Continue reading Enterprise Anticipates AI Impact but Few Execs Are Prepared
By Paula Parisi, March 15, 2023
ChatGPT “occupational exposure” is a new area of study examining which jobs are vulnerable to replacement by AI chatbots with strong language skills. A Princeton University survey suggests telemarketers, history teachers and sociologists are among those at risk, while physical laborers needn’t worry right now. A second study, by MIT graduate students, says language-dependent jobs are not destined for replacement but are in for an AI assist. Asked to complete office tasks like writing press releases, emails and short reports, those using ChatGPT were 37 percent faster and produced superior results. Continue reading Generative AI May Improve Knowledge Workers’ Productivity
By Rob Scott, December 19, 2022
Facing backlash against his executive leadership, Twitter’s new owner and CEO, billionaire Elon Musk, conducted an informal 12-hour poll over the weekend asking users of the popular social media platform whether he should keep his new position. “Should I step down as head of Twitter?” the controversial executive asked. “I will abide by the results of this poll.” After more than 17.5 million responses, the results indicate that a majority of users believe Musk should step down from his post (57.5 percent voted in the affirmative). As of press time, it remains unclear what action Musk may take in light of the poll results. Continue reading Twitter Users Vote in Favor of Musk Stepping Down as CEO