By Paula Parisi, December 17, 2021
Advances in language comprehension for artificial intelligence are emerging from San Francisco’s OpenAI and London-based DeepMind. OpenAI, which has been working on large language models, now lets customers fine-tune its GPT-3 models with their own custom data, while the Alphabet-owned DeepMind is talking up Gopher, a 280-billion-parameter deep-learning language model that has scored impressively on tests. Sophisticated language models can comprehend natural language as well as predict and generate text, requirements for building advanced AI systems that dispense information and advice or follow instructions. Continue reading Advances by OpenAI and DeepMind Boost AI Language Skills
By Debra Kaufman, August 17, 2021
OpenAI’s Codex, an AI system that translates natural language into code, was released via an API in private beta. Trained on billions of lines of public code, Codex can turn plain English commands into more than a dozen programming languages and also powers the GitHub service Copilot, which suggests whole lines of code within Microsoft Visual Studio and other development environments. OpenAI explained that Codex will be offered for free during an “initial period,” and invites “businesses and developers to build on top of it through the API.”
Continue reading OpenAI Debuts Tool to Translate Natural Language into Code
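Codex-style models are typically prompted by framing the plain-English instruction as a docstring above a function signature and letting the model complete the body. A minimal sketch of shaping such a prompt (the helper name and the exact prompt format are illustrative assumptions, not OpenAI’s documented interface):

```python
def codex_prompt(instruction: str, signature: str) -> str:
    """Frame a plain-English instruction as a docstring above a function
    signature -- a prompt shape that code-completion models finish well."""
    return f'"""\n{instruction}\n"""\n{signature}\n'

# The resulting string would be sent as the prompt of a completion
# request to the private-beta API, which returns the function body.
prompt = codex_prompt("Return the squares of a list of numbers.",
                      "def squares(nums):")
print(prompt)
```

The key design point is that the model is not given a formal grammar of the task; the docstring-plus-signature convention alone steers it toward emitting code in the target language.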
By Debra Kaufman, May 27, 2021
OpenAI unveiled a $100 million OpenAI Startup Fund to back early-stage companies pursuing ways AI can have a “transformative” impact on healthcare, education, climate change and other fields. OpenAI chief executive Sam Altman said the Fund will make “big, early bets” on no more than 10 such companies. OpenAI, with funding from Microsoft and others, will manage the Fund. Selected companies will get “early access” to future OpenAI systems, support from OpenAI’s team and credits for Microsoft Azure. Continue reading OpenAI and Microsoft Introduce $100 Million AI Startup Fund
By Debra Kaufman, March 31, 2021
OpenAI’s GPT-3, the much-noted AI text generator, is now being used in more than 300 apps by “tens of thousands” of developers and is generating 4.5 billion words per day. Meanwhile, EleutherAI, a collective of researchers, is building transformer-based language models with plans to offer an open-source, GPT-3-sized model to the public for free. The non-profit OpenAI has an exclusivity deal with Microsoft that gives the tech giant unique access to GPT-3’s underlying code, but OpenAI has made its general API available to all comers, who build services on top of it. Continue reading OpenAI and EleutherAI Foster Open-Source Text Generators
By Debra Kaufman, March 2, 2021
OpenAI’s natural language processing (NLP) model GPT-3 has 175 billion parameters, compared with the mere 1.5 billion of its predecessor, GPT-2. GPT-3’s immense size enables it to generate human-like text from only a few examples of a task. Now that many users have gained access to the API, the result has been some interesting use cases and applications. But the ecosystem is still nascent, and it remains to be seen how it matures, or whether it is superseded by another NLP model. Continue reading GPT-3: New Applications Developed for OpenAI’s NLP Model
By Phil Lelyveld, January 15, 2021
Two CES 2021 panels addressed the current state of and anticipated advances in quantum computing, which is already being applied to problems in business, academia and government. However, the hardware is not as stable and robust as people would like, and the algorithms are not yet up to the task of solving the problems many researchers envision for them. This has not stopped entrepreneurs, major corporations and governments from dedicating significant resources to R&D and implementations, nor VCs and sovereign funds from making major bets on who the winners will be. Continue reading CES: Sessions Examine the Potential of Quantum Computing
By Debra Kaufman, January 7, 2021
OpenAI unveiled DALL-E, which generates images from text using two multimodal AI systems that leverage computer vision and NLP. The name is a reference to surrealist artist Salvador Dalí and Pixar’s animated robot WALL-E. DALL-E relies on a 12-billion-parameter version of GPT-3. OpenAI demonstrated that DALL-E can manipulate and rearrange objects in generated imagery and also create images from scratch based on text prompts. It has stated that it plans to “analyze how models like DALL·E relate to societal issues.” Continue reading OpenAI Unveils AI-Powered DALL-E Text-to-Image Generator
By Debra Kaufman, December 23, 2020
San Francisco-based Fable Studio, a VR studio that won an Emmy Award for its “Wolves in the Walls” project, has debuted its first efforts at creating conversational AI virtual beings. Charlie and Beck, two characters that can converse as if they were real people, are Fable Studio’s bet on a future of such virtual beings for entertainment and even companionship. Its first AI being was Lucy, an 8-year-old girl who starred in “Wolves in the Walls” and is now a standalone online character after the company debuted her in alpha tests last month. Continue reading Fable Studio Bets on a Future with AI-Powered Virtual Beings
By ETCentric, November 9, 2020
To fully examine the inner workings and potential impact of deep learning language model GPT-3 on media, ETC’s project on AI & Neuroscience in Media is hosting a virtual event on November 10 from 11:00 am to 12:15 pm. RSVP here to join moderator Yves Bergquist of ETC@USC and presenter Dr. Mark Riedl of Georgia Tech as they present, “Machines That Can Write: A Deep Look at GPT-3 and its Implications for the Industry.” The launch last June of OpenAI’s GPT-3, a language model that uses deep learning to generate human-like text, has raised many questions in the creative community and the world at large. Continue reading Virtual Event: GPT-3 and Its Implications for the M&E Industry
By Debra Kaufman, September 24, 2020
Microsoft struck a deal with AI startup OpenAI to be the exclusive licensee of language comprehension model GPT-3. According to Microsoft EVP Kevin Scott, the deal is an “incredible opportunity to expand our Azure-powered AI platform in a way that democratizes AI technology.” Among potential uses are “aiding human creativity and ingenuity in areas like writing and composition, describing and summarizing large blocks of long-form data (including code), converting natural language to another language.” Continue reading Microsoft Inks Deal with OpenAI for Exclusive GPT-3 License
By Debra Kaufman, August 25, 2020
In the not-so-distant future there will likely be services that let users choose plots, characters and locations that are then fed into an AI-powered transformer, resulting in a fully customized movie. The idea of using generative artificial intelligence to create content goes back to DeepDream, the computer vision program created in 2015 by Google engineer Alexander Mordvintsev. Bringing that fantasy closer to reality is the AI system GPT-3, which produces convincingly coherent and interactive writing, often fooling the experts. Continue reading AI-Powered Movies in Progress, Writing Makes Major Strides
By Debra Kaufman, August 13, 2020
FireEye data scientist Philip Tully showed off a convincing deepfake of Tom Hanks that he built for less than $100 with open-source code. Until recently, most deepfakes have been low quality and fairly easy to spot. FireEye demonstrated that even those with little AI expertise can now use published AI code and a bit of fine-tuning to create much more convincing results. But many experts believe deepfake text is the bigger threat, as the GPT-3 autoregressive language model can produce text that is difficult to distinguish from that written by humans. Continue reading Quality of Deepfakes and Textfakes Increase Potential Impact
By Debra Kaufman, July 22, 2020
OpenAI’s Generative Pre-trained Transformer (GPT), a general-purpose language algorithm that uses machine learning to answer questions, translate text and write predictively, is currently in its third version. GPT-3, first described in a research paper published in May, is now in private beta with a select group of developers. The goal is to eventually launch it as a commercial cloud-based subscription service. Its predecessor, GPT-2, released last year, was able to create convincing text in several styles. Continue reading Beta Testers Give Thumbs Up to New OpenAI Text Generator
By Debra Kaufman, June 15, 2020
After collecting trillions of words, artificial intelligence research institute OpenAI debuted its first commercial product, the API. Its goal is to create the “most flexible general-purpose AI language system” in existence. Currently, the API’s skills include translating between languages, writing news stories and answering everyday questions. The API is in limited testing and, said chief executive Sam Altman, will be released broadly for use in a range of tasks, such as customer support, education and games. Continue reading OpenAI Tests Commercial Version of Its AI Language System
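At launch, the API worked as a hosted text-completion service: a client sends a prompt plus sampling parameters and receives generated text back. A minimal sketch of the request shape a customer-support or education app might build (the field names follow the early beta’s pattern and are assumptions, not guaranteed to match current documentation):

```python
import json

def completion_request(prompt: str, max_tokens: int = 64) -> str:
    # Hypothetical JSON body for a text-completion call; the fields
    # (prompt, max_tokens, temperature) mirror the early beta's pattern.
    body = {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }
    return json.dumps(body)

payload = completion_request("Q: What causes rain?\nA:")
# During the limited test, a body like this would be POSTed with an
# Authorization bearer token to the hosted completions endpoint.
```

Because the model runs behind the API rather than shipping to customers, OpenAI can meter usage per request, which is what makes the subscription business model described above possible.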