Project Music GenAI Control, an experimental work from Adobe Research, is setting out to change how people create and edit custom audio and music. The prototype tool lets creators generate music from text prompts, “and then have fine-grained control to edit that audio for their precise needs,” according to Adobe. Designed to help create music for broadcasts, podcasts or other “audio that’s just the right mood, tone, and length,” it can generate music from text prompts like “powerful rock,” “happy dance” or “sad jazz,” says Adobe Research Senior Research Scientist Nicholas Bryan, a creator of the technology.
With a simple user interface, users can transform their generated audio based on a reference melody, adjusting the tempo, structure, and pattern repetition of a piece of music, the Adobe announcement explains. Users can also “choose when to increase and decrease the audio’s intensity, extend the length of a clip, re-mix a section, or generate a seamlessly repeatable loop.”
Adobe’s new GenAI experiment, announced at the Hot Pod Summit in Brooklyn this week, “aims to help people create and customize music without any professional audio experience,” writes The Verge, adding that it “allows users to generate music using text prompts and then edit that audio without jumping over to dedicated editing software.”
While Adobe says the Project Music GenAI Control demo used public domain content for showcase purposes, The Verge notes it remains unclear “if the tool could allow any audio to be directly uploaded” as reference material, or how long generated clips can be.
Similar apps are available or in development. These include Meta Platforms’ open-source AudioCraft and Google’s MusicLM, which allow users to generate audio using text prompts. But The Verge points out these competing tools offer “little to no support for editing the music output,” which “means you’d have to keep generating audio from scratch until you get the results you want or manually make those edits yourself using audio editing software.”
TechCrunch points out that Project Music GenAI Control was developed in conjunction with researchers at Carnegie Mellon School of Computer Science and the University of California San Diego. There are no plans yet to make the tool publicly available, though that could change “at some future date.”