Apple Music Tests New System to Identify AI-Made Music

Apple Music is introducing a new labeling system designed to reveal when artificial intelligence has been used in the creation of music and related content. The company recently informed record labels and distributors that they can begin applying new “transparency tags” to songs uploaded to the platform, marking a step toward greater disclosure about how AI is used in music production.
The new tags will appear in the metadata that accompanies music files delivered to Apple Music. Metadata already includes details such as artist names, song titles, genres, and credits. Apple’s update expands that information to allow content providers to flag whether AI played a role in creating parts of a release.
The disclosure system covers four different categories of creative material. These include album artwork, the audio track itself, the composition (such as lyrics or other musical elements), and music videos associated with a release. Labels and distributors can apply one or several tags depending on where AI tools were used during the creative process.
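Since the tags travel alongside ordinary metadata fields, a release delivery might carry them roughly as sketched below. Apple has not published a schema, so every field name here (`ai_transparency_tags` and the category keys) is an assumption for illustration only, not Apple's actual format.

```python
def build_release_metadata(artist, title, genre, ai_usage):
    """Attach optional AI-disclosure tags to standard release metadata.

    `ai_usage` maps each of the four disclosed categories to True/False.
    Because the system is voluntary, omitted categories are simply left
    untagged; absence of a tag implies nothing about AI use.
    Field names are hypothetical -- Apple has not published a schema.
    """
    categories = {"artwork", "audio", "composition", "music_video"}
    unknown = set(ai_usage) - categories
    if unknown:
        raise ValueError(f"Unrecognized AI-usage categories: {unknown}")

    metadata = {"artist": artist, "title": title, "genre": genre}
    tags = [cat for cat, used in ai_usage.items() if used]
    if tags:  # only include the field when at least one tag applies
        metadata["ai_transparency_tags"] = sorted(tags)
    return metadata


# Example: a release whose cover art used AI tools, but whose
# composition did not.
release = build_release_metadata(
    "Example Artist", "Example Song", "Pop",
    ai_usage={"artwork": True, "composition": False},
)
```

A release delivered with an empty `ai_usage` dict would carry no tag at all, mirroring the article's point that an untagged upload is not assumed to be AI-free.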
The responsibility for disclosure will largely fall on labels and distributors rather than the platform itself. Apple said it will defer to content providers to decide what qualifies as AI-generated material. The tags will work similarly to other metadata fields, such as genres or credits.
The catch is that the tagging system is voluntary. If a song or album is delivered without a transparency tag, Apple will not automatically assume that AI was used. The platform has not introduced a verification or enforcement system to confirm whether uploads are accurately labeled.
Apple’s approach stands in contrast to strategies used by some competitors. French streaming service Deezer has invested in technology designed to automatically detect AI-generated tracks. Instead of relying on labels to report AI usage, Deezer analyzes audio files using its own detection tools.
Deezer says the scale of AI music uploads is already substantial. The service reported that more than 60,000 fully AI-generated tracks are now delivered to its platform each day, and that synthetic content accounts for roughly 39% of the music uploaded daily.
Deezer has also linked AI music uploads to streaming fraud. The platform reported that up to 85% of streams tied to AI-generated tracks in 2025 were fraudulent, meaning the plays were generated to manipulate royalty payouts rather than reflect genuine listening.
Other streaming services have taken additional steps to manage AI content. Spotify tightened its policies in 2025 by removing songs that imitate another artist’s voice without permission and introducing filters to reduce spam uploads.
Apple’s tagging system does not directly resolve those concerns, but it represents an attempt to improve transparency as the technology spreads. By documenting where AI appears in the creative process, the company hopes the industry will gain clearer data about how widely the tools are being used.
Users have questioned whether labels or producers will actually apply these tags. Many argue that openly marking a song as AI-generated could affect how listeners perceive the music and potentially harm an artist’s reputation. In a market where authenticity and originality remain highly valued, producers may worry that disclosure labels could lead audiences to see the work as less creative or less legitimate. Because Apple’s tagging system currently relies on voluntary reporting, the initiative’s effectiveness may depend on whether content providers are willing to prioritize transparency over potential reputational concerns.