By Paula Parisi, April 24, 2023
In a move it sees as a force multiplier, Alphabet is consolidating the Brain team from Google Research with DeepMind, the UK-based artificial intelligence research lab Google acquired in 2014, into a single unit called Google DeepMind. The teams’ collective accomplishments include AlphaGo, Transformers, WaveNet and AlphaFold, as well as software frameworks like TensorFlow and JAX for expressing, training and deploying large-scale ML models. “Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI,” the company announced. Continue reading Google Restructures AI Research Units into Google DeepMind
By Paula Parisi, April 6, 2023
Clement Delangue, co-founder and CEO of New York-based Hugging Face, turned a casual invitation to meet with open-source AI stakeholders during a trip to San Francisco into what is being called the “Woodstock of AI.” In a matter of days, the event ballooned to more than 5,000 people hosted at the Exploratorium on March 31. “We just crossed 1,500 registrations for the Open-Source AI Meetup!” Delangue messaged the RSVP list days before the event. “What started with a tweet might lead to the biggest AI meetup in history.” The 8-year-old company is also making headlines for its new HuggingGPT system. Continue reading Hugging Face Rallies Open-Source AI Community at Meetup
By Paula Parisi, September 29, 2022
Intel announced its consumer GPU brand Arc last year, and the company now says the cards will begin shipping in Q4. The Arc A770 Limited Edition desktop gaming card will be available October 12, starting at $329, “the sweet spot of desktop graphics,” according to CEO Pat Gelsinger, who said the GPU “delivers 65 percent better peak performance versus competition on ray tracing.” Intel says other new GPU models, including the Arc Pro A30M for mobile unveiled last month at SIGGRAPH, will also come to market by the end of the year. The new GPUs feature built-in ray tracing hardware, machine learning capabilities and industry-first AV1 hardware encoding acceleration. Continue reading New GPUs Showcased at Intel’s Innovation Developer Event
By Debra Kaufman, March 3, 2021
The National Security Commission on Artificial Intelligence identified China as the first country to challenge U.S. technological dominance since the end of World War II. To counter this threat, the 15-member commission issued a 756-page report urging a $40 billion investment in artificial intelligence research and development to make the United States “AI ready” by 2025. The report also called for the U.S. to stay two generations ahead of China in semiconductor manufacturing, and to that end suggested a significant tax credit for chip makers. Continue reading National Security Commission on AI Pinpoints Chinese Threat
By Debra Kaufman, January 21, 2020
AI can enable many important tasks from manufacturing to medicine, but only if the applications are speedy and secure. Communication via the cloud adds latency and poses privacy risks, which is why Google worked on a solution, dubbed Coral, that avoids centralized data centers. Coral product manager Vikram Tank described Coral as a “platform of [Google] hardware and software components … that help you build devices with local AI — providing hardware acceleration for neural networks … right on the edge device.” Continue reading Google Bypasses Cloud to Offer AI to Enterprise Customers
By Debra Kaufman, November 15, 2019
Microsoft will begin providing customers of its Azure cloud platform with chips made by U.K. startup Graphcore, with the goal of speeding up the computations for artificial intelligence projects. Graphcore, founded in Bristol in 2016, has attracted several hundred million dollars in investment and the attention of many AI researchers. Microsoft invested in Graphcore last December, with the hope of making its cloud services more compelling. Graphcore’s chips have not previously been available publicly. Continue reading Microsoft Pairs Azure Cloud Platform, Graphcore AI Chips
By Debra Kaufman, November 15, 2019
Microsoft’s GitHub revealed plans for the Arctic Code Vault, which will store open source projects on film at 8.8 million pixels per frame. The Vault will be constructed in a decommissioned coal mine in Svalbard, Norway, to preserve TensorFlow, Flutter and other open source software for 1,000 years. Svalbard, also home to a global seed vault, is one of the northernmost inhabited places on earth, with permafrost that extends “hundreds of meters” below the surface. GitHub also launched its own official mobile app. Continue reading GitHub Is Planning a Vault to Preserve Open Source Code
By Debra Kaufman, August 5, 2019
During its Cloud Next 2019 developer conference, Google revealed that its Advanced Protection Program would be widely released and that Titan Security Keys would be more readily available at retail. The former, which is intended to prevent cyberattacks against high-profile targets such as politicians and business leaders, will debut in beta for G Suite, Google Cloud Platform (GCP), and Cloud Identity customers. The Advanced Protection Program “enforces the use” of the Titan Security Key or compatible third-party hardware, blocking access to third-party accounts not approved by an admin. Continue reading Google’s Cloud Platform Updates Focus on Security Issues
By Debra Kaufman, April 4, 2019
Amazon introduced AWS Deep Learning Containers, a collection of Docker images preinstalled with popular deep learning frameworks, with the aim of making it easier to deploy AI-enabled apps on Amazon Web Services. AWS general manager of deep learning Dr. Matt Wood noted that the company has “done all the hard work of building, compiling, and generating, configuring, optimizing all of these frameworks,” taking that burden off of app developers. The container images are all “preconfigured and validated by Amazon.” Continue reading AWS Tool Aims to Simplify the Creation of AI-Powered Apps
By Debra Kaufman, March 28, 2019
Google is forming the Advanced Technology External Advisory Council (ATEAC), an eight-member external group charged with considering “some of the most complex challenges [in AI],” such as facial recognition and fairness. The move comes about a year after Google issued a charter stating its AI principles, and months after Google said it would not provide “general-purpose facial recognition APIs” until relevant policy issues are addressed. The advisory group will hold four meetings in 2019, starting in April. Continue reading Google Establishes Advisory Panel to Examine AI Fairness
By Debra Kaufman, March 27, 2019
Amazon is teaming up with the National Science Foundation (NSF), pledging up to $10 million in research grants over the next three years to further fairness in artificial intelligence and machine learning. More specifically, the grants will target “explainability,” potential negative biases and effects, mitigation strategies for such effects, and validation of fairness and inclusivity. The goal is to encourage “broadened acceptance” of AI, thus enabling the U.S. to make better progress on the technology’s evolution. Continue reading Amazon, National Science Foundation to Further AI Fairness
By Debra Kaufman, March 7, 2019
Google has unveiled GPipe, an open-source library that makes training deep neural networks more efficient, released under Lingvo, a TensorFlow framework for sequence modeling. According to Google AI software engineer Yanping Huang, “in GPipe … we demonstrate the use of pipeline parallelism to scale up DNN training,” noting that larger DNN models “lead to better task performance.” Huang and his colleagues published a paper on “Efficient Training of Giant Neural Networks Using Pipeline Parallelism.” Continue reading Google GPipe Library Speeds Deep Neural Network Training
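The intuition behind pipeline parallelism is that splitting a batch into micro-batches lets the stages of a partitioned model work concurrently instead of idling. A minimal pure-Python sketch of that scheduling arithmetic (a toy cost model, not the GPipe API):

```python
# Toy model of pipeline parallelism's fill-and-drain schedule.
# Assumption: each stage takes one time step per micro-batch.

def pipeline_steps(num_stages: int, num_microbatches: int) -> int:
    """Time steps to push all micro-batches through a pipeline:
    the classic fill-and-drain formula."""
    return num_stages + num_microbatches - 1

def naive_steps(num_stages: int, num_microbatches: int) -> int:
    """Without pipelining: each micro-batch traverses all stages serially."""
    return num_stages * num_microbatches

# Splitting a batch into 8 micro-batches on a 4-stage pipeline:
print(pipeline_steps(4, 8))  # 11 steps with overlap
print(naive_steps(4, 8))     # 32 steps without overlap
```

The gap widens with more micro-batches, which is why micro-batching keeps the partitioned model's accelerators busy.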
By Debra Kaufman, December 5, 2018
Intel revealed that it has made progress on an anonymized, encrypted method of model training. Industries such as healthcare that need a way to use AI tools on sensitive, personally identifiable information have been waiting for just such a capability. At the NeurIPS 2018 conference in Montreal, Intel showed off its open-source HE-Transformer, which works as a backend for its nGraph neural network compiler and allows AI models to work on encrypted data. HE-Transformer is also based on a Microsoft Research encryption library. Continue reading Intel Describes Tool to Train AI Models with Encrypted Data
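The idea of computing on encrypted data can be illustrated with a toy homomorphic scheme. The sketch below implements textbook Paillier encryption with deliberately tiny, insecure parameters purely to show the homomorphic property; HE-Transformer itself relies on a production-grade encryption library, not this scheme:

```python
# Toy Paillier cryptosystem: multiplying two ciphertexts yields an
# encryption of the SUM of the plaintexts, so a party can add encrypted
# numbers without ever decrypting them. Insecure toy parameters only.
from math import gcd
import random

p, q = 17, 19                # toy primes -- far too small for real use
n = p * q
n_sq = n * n
g = n + 1                    # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x: int) -> int:        # Paillier's L function
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)          # decryption constant

def encrypt(m: int) -> int:
    r = random.choice([x for x in range(2, n) if gcd(x, n) == 1])
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n_sq)) * mu) % n

# Homomorphic addition: multiply ciphertexts mod n^2.
c = (encrypt(3) * encrypt(4)) % n_sq
print(decrypt(c))  # 7
```

Schemes used in practice support richer operations on ciphertexts, which is what lets a compiler backend run neural network arithmetic over encrypted inputs.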
By Debra Kaufman, November 20, 2018
Hive, a startup founded by Kevin Guo and Dmitriy Karpman, trains domain-specific artificial intelligence models with the help of its 100 employees and a 700,000-person workforce that classifies images and transcribes audio. The company uses the Hive Work smartphone app and website to recruit the people who label the data, and recently introduced three products: Hive Data, Hive Predict, and Hive Enterprise. Shortly after the product launch, Peter Thiel’s Founders Fund and other venture capital firms invested $30 million in the startup. Continue reading Hive Builds Tailored AI Models via 700,000-Person Workforce
By Debra Kaufman, November 6, 2018
Researchers at 20th Century Fox published a paper revealing how they are using artificial intelligence to analyze movie trailers. Published last month, the paper describes Merlin, the code name for a machine vision system that examines trailers frame by frame and labels the objects and events it finds. This data is then compared with data from other trailers, on the theory that trailers with similar labels will attract similar kinds of people. Movie studios already gather similar data via interviews and questionnaires. Continue reading 20th Century Fox, Google Use AI to Analyze Movie Trailers
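The label-comparison idea can be sketched with a simple set-overlap measure. The trailer labels and the Jaccard metric below are illustrative assumptions, not Fox's actual labels or model, which the paper describes as a learned system:

```python
# Illustrative sketch: comparing trailers by the overlap of their
# detected labels, using Jaccard similarity (intersection over union).
# Labels here are hypothetical examples.

def jaccard(a: set, b: set) -> float:
    """Fraction of labels two trailers share, out of all labels in either."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

trailer_a = {"car chase", "explosion", "night", "city"}
trailer_b = {"car chase", "explosion", "desert", "city"}
trailer_c = {"wedding", "beach", "dialogue"}

print(round(jaccard(trailer_a, trailer_b), 2))  # 0.6 -- similar trailers
print(round(jaccard(trailer_a, trailer_c), 2))  # 0.0 -- dissimilar
```

Trailers scoring high on such a measure would, under the paper's premise, be expected to draw overlapping audiences.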