Following up on my previous post about Donald Fagen’s “I.G.Y.”: ChatGPT identified the solo/fill instrument as a synthesizer, offered some guesses as to the model and the player, and was quite convinced that the sound was NOT made with a wind instrument, particularly not a saxophone.
The answer was based on a pretty good body of information available on the internet. First, there were Reddit threads with synthesizer enthusiasts comparing notes. Second, ChatGPT accessed the gear list and album credits from The Nightfly. Altogether, it’s likely that ChatGPT’s answer is correct.
However, my AI assistant did not find any single, definitive piece of data saying, “This sound was made by a synthesizer.” So I posed a question:
If Donald Fagen said in an interview today that that sound was made by him blowing really hard into a saxophone, would that statement override your evaluation of all the other information you cited?
In a word, no.
First, ChatGPT reiterates the preponderance of other evidence to the contrary, including expert breakdowns and gear rundowns from the album.
Next, it tells me that the timbre of the sound doesn’t match typical sax acoustics, “especially in attack, decay, and modulation.” This is kind of interesting, considering that ChatGPT told me previously that it has NOT listened to the audio directly. Is it quoting from someone’s analysis?
Finally, it mentions that Fagen is “famously sarcastic and cryptic in interviews” and claims that Fagen often mocks music journalism and drops red herrings for fun. That’s definitely a rabbit hole to explore; how did ChatGPT arrive at this assessment?
The end result is that even if a contradictory statement came straight from the horse’s mouth today, ChatGPT would still claim the sound is a synth. It would mention the statement as a noteworthy contradiction in future queries, but wouldn’t consider it accurate without more supporting evidence, such as session notes, a demonstrated performance, or video footage.
Why is this important? In the context of this music, it’s not — I love the song, and I’m confident ChatGPT is correct.
However, the way ChatGPT arrives at a conclusion, and how it reacts in the face of a seemingly authoritative contradiction, absolutely is important. If you view this through the lens of “How easily can an AI be fooled?”, it becomes a much more serious question.
Consider marketing, political analysis, crime investigation, financial analysis, and a myriad of other topics — wouldn’t you like to know exactly how your AI’s opinion can be influenced?