By Debra Kaufman, March 2, 2021
OpenAI’s natural language processing (NLP) model GPT-3 offers 175 billion parameters, compared with its predecessor GPT-2’s mere 1.5 billion. That immense size enables GPT-3 to generate human-like text from only a few examples of a task. Now that many users have gained access to the API, some interesting use cases and applications have emerged. But the ecosystem is still nascent, and how it matures, or whether it is superseded by another NLP model, remains to be seen. Continue reading GPT-3: New Applications Developed for OpenAI’s NLP Model
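To make the few-shot idea concrete, here is a minimal sketch of prompting GPT-3 through the 2021-era openai Python client. The engine name, the translation examples, and the sampling settings are illustrative assumptions, not code from the article:

```python
# A minimal sketch of few-shot prompting against the GPT-3 API, assuming the
# early "openai" Python client. Engine choice and examples are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: an API key from the OpenAI dashboard

# Few-shot prompt: two worked examples of the task, then the case to complete.
prompt = (
    "Translate English to French.\n"
    "English: Hello, how are you?\nFrench: Bonjour, comment allez-vous ?\n"
    "English: Where is the library?\nFrench: Où est la bibliothèque ?\n"
    "English: I would like a coffee.\nFrench:"
)

response = openai.Completion.create(
    engine="davinci",    # base GPT-3 engine exposed by the early API
    prompt=prompt,
    max_tokens=32,
    temperature=0.3,     # low temperature keeps the completion on-task
    stop="\n",           # stop at the end of the answer line
)

print(response.choices[0].text.strip())
```

The model infers the task purely from the two in-prompt examples; no fine-tuning or task-specific training is involved, which is what the "few examples of a task" claim refers to.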
By Yves Bergquist, December 5, 2019
We’re not going to lie: the annual “heads up CES” piece on artificial intelligence is a hit-or-miss exercise. Technology rarely evolves on an annual time scale, and certainly not advanced technology like AI. Yet here we are once again. Sure, 2019 was as fruitful as it gets in the AI research community. The long-running debate between Neural Network Extremists (those pushing for an “all neural nets all the time” approach to intelligence) and the Fanatical Symbolists (those advocating a hybrid approach combining knowledge bases, expert systems and neural nets) took an ugly “Mean Girls” turn, with two of the titans of the field (Gary Marcus and Yann LeCun) trading real insults on Twitter just a few days ago. Continue reading The Human Interface: What We Expect From AI at CES 2020