How to Use Phonetic Metadata in Revelator Pro

Modified on Wed, 29 Oct at 12:36 PM

In many markets, listeners search by sound, not spelling. To help your releases be found more easily—especially when written in non-Roman scripts like Japanese, Korean, or Ukrainian—you can now enter phonetic metadata directly in Revelator Pro.


Where Can I Add Phonetic Metadata?

You can add phonetic metadata for:

  • Artist Names
  • Track Titles
  • Release Titles


Why It Matters

  • Improve Search Accuracy: Help fans find your music even if they type or speak the name phonetically.
  • Optimize for Voice Assistants: Improve visibility across Siri, Alexa, Google Assistant, and similar platforms.
  • Support Global Discovery: Make your catalog easier to find in all markets, regardless of native language.

How DSPs Use This Metadata

Phonetic metadata is delivered via the DDEX standard and is supported for indexing and discovery by several major platforms.

Exact behavior varies by DSP:

  • Apple Music supports DDEX pronunciation fields for metadata compliance and indexing.
  • Other DSPs may use this data to support internal search, catalog matching, or smart assistant compatibility.

Note: Visibility and functionality of phonetic metadata may vary by DSP. Some may use it for search or voice assistant indexing but not display it publicly.
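As a rough intuition for why this helps search, here is a minimal sketch (not any DSP's actual implementation) of how an index could use phonetic metadata: a query typed in Latin script can match a track whose title is written in a non-Roman script.

```python
# Minimal sketch of phonetic-aware search. Titles and phonetic
# values are illustrative examples.
catalog = [
    {"title": "初恋", "title_phonetic": "Hatsukoi"},
    {"title": "夜に駆ける", "title_phonetic": "Yoru ni Kakeru"},
]

def search(query: str) -> list[str]:
    """Match a query against both the display title and its phonetic form."""
    q = query.casefold()
    return [
        t["title"]
        for t in catalog
        if q in t["title"].casefold() or q in t["title_phonetic"].casefold()
    ]

print(search("hatsukoi"))  # → ['初恋']
```

Without the phonetic field, the query "hatsukoi" would return nothing, since it shares no characters with the stored title.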
