THE YEAR IS 2040, and as you wait for a drone to deliver your pizza, you decide to throw on some tunes. Once a commodity bought and sold in stores, music is now an omnipresent utility invoked via spoken-word commands. In response to a simple “play,” an algorithmic DJ opens a blended set of songs, incorporating information about your location, your recent activities and your historical preferences—complemented by biofeedback from your implanted SmartChip. A calming set of lo-fi indie hits streams forth, while the algorithm adjusts the beats per minute and acoustic profile to the rain outside and the fact that you haven’t eaten for six hours.
The rise of such dynamically generated music is the story of the age. The album, that relic of the 20th century, is long dead. Even the concept of a “song” is starting to blur. Instead there are hooks, choruses, catchphrases and beats—a palette of musical elements that are mixed and matched on the fly by the computer, with occasional human assistance. Your life is scored like a movie, with swelling crescendos for the good parts, plaintive, atonal plunks for the bad, and fuzz-pedal guitar for the erotic. The DJ’s ability to read your emotional state approaches clairvoyance. But the developers discourage the name “artificial intelligence” to describe such technology. They prefer the term “mood-affiliated procedural remixing.”
Right now, the mood is hunger. You’ve put on weight lately, as your refrigerator keeps reminding you. With its assistance—and the collaboration of your DJ—you’ve come up with a comprehensive plan for diet and exercise, along with the attendant soundtrack. Already, you’ve lost six pounds. Although you sometimes worry that the machines are running your life, it’s not exactly a dystopian experience—the other day, after a fast-paced dubstep remix spurred you to a personal best on your daily run through the park, you burst into tears of joy.
Cultural production was long thought to be an impregnable stronghold of human intelligence, the one thing the machines could never do better than humans. But a few maverick researchers persisted, and—aided by startling, asymptotic advances in other areas of machine learning—suddenly, one day, they could. To be a musician now is to be an arranger. To be a songwriter is to code. Atlanta, the birthplace of “trap” music, is now a locus of brogrammer culture. Nashville is a leading technology incubator. The Capitol Records tower was converted to condos after the label uploaded its executive suite to the cloud.
Acoustical engineering has made progress too—it now permits the digital resurrection of long-dead voices from the past. Frank Sinatra has had several recent hits, and Whitney Houston is topping the charts with new material. (Her “lyrics” are aggregated from the social-media feeds of teenage girls, and include the world’s first “vocalized emoji.”)
Limited versions of this technology have been made available for home production. Your own contribution was a novelty remix of “...Baby One More Time,” as sung by Etta James. It was a minor sensation that garnered more than 1,000 likes on BrainShare. The liner notes for this kitschy collaboration were drafted automatically, and listed 247 separate individuals, including prioritized rights holders who were awarded the royalties from advertising sold against the song. You got paid nothing, and were credited last, as “curator.”
The recording industry, as it anachronistically continues to call itself, was nearly bankrupted by digital piracy at the beginning of this millennium. That prompted a shift in thinking. Noting the growth of the adjacent videogame industry, forward-thinking executives began to adopt its business model, moving away from unit sales and toward the drip-feed revenue model of continuous updates organized around verified purchases. A resurgence of profitability followed, although a significant portion of the spoils went to the software developers, who have begun to exhibit their own offbeat version of classic music-business decadence. (A recent profile of the 23-year-old Harvard graduate behind a key hook-selection algorithm revealed the little punk just spent $100 million to buy 11 acres of undeveloped land in the Pacific Palisades, on which he planned to site his 250-square-foot portable micro-home.)
There’s pushback, of course. A viral meme of “The Terminator” in a DJ booth briefly trended on social media before being quarantined to 4chan by the Centers for Disease Control and Prevention. A collective of music critics signed an open letter to the industry accusing it of putting commerce ahead of art. (Many later found work as advertising copywriters.) Most surprising of all, a burgeoning community of upscale youngsters is returning to older modes of production. These “new musicians” insist on playing their actual instruments into studio microphones, then mastering “finished” versions of their individual songs. The throwback tunes are then distributed, in charming antiquarian fashion, through a reconstructed version of the Napster file-sharing service, which one accesses through—get this—a computer terminal.
The doorbell announces the arrival of your food. Literally: “Your dinner’s here,” it says. About time; it’s been seven minutes since you ordered. As you begin to eat, the DJ lowers the volume and recalibrates to an ambient register. The food, like the music, is a little bland—your orders are routed through your refrigerator, which is monitoring your sodium intake—but the meal is adequately filling. Once finished, you decide it’s time to go off-grid.
You utter the command “manual,” and your digital servants come to a stop. The effect is not entirely unlike an electricity blackout, and you panic, for a moment, at the prospect of making an unassisted choice. The first thing that comes to mind is an oldie: “Anaconda,” by Nicki Minaj. (The rapper’s oeuvre is experiencing an ironic resurgence in popularity following her election to the presidency.) As the song begins to play, you permit yourself a nostalgic indulgence.
The 2010s. With some embarrassment, you recall the regrettable years you spent pawing at your cellphone—back then, people still conceived of the Internet as somehow separate from “real life.” The roots of the transformation in music can be traced to that decade, although the technology was clumsy in its infancy. Seeking to differentiate their products in the streaming wars, Google and Apple (and Facebook, after it bought Spotify) spent hundreds of millions of dollars acquiring and developing rudimentary song-selection technology that was, for the longest time, a colossal bust. You never got the song you wanted, voice commands barely worked, and the term “DJ” referred to some French guy in a mask who wasn’t even monitoring your serotonin levels.
But then the convergence happened, and, as the old album title had it, “Nothing Was the Same.” Life without procedural remixing is tedious—two minutes into “Anaconda” and you’re already bored of the song. Your SmartChip registers the signature biochemicals of disappointment, but you refuse to revert to automatic mode. Listening to static music made purely by humans is an inferior experience, sure, but you feel it’s important to unplug from the AI every few days. You’re in charge here, right?
Stephen Witt is the author of How Music Got Free (Viking, 2015).
Comment
"Music to my ears!!" :p
"Experiments with parametric and directional speaker systems have been going on since the early 1960s. Ultrasonic sound has much smaller wavelengths than regular audible sound, making it much more directional than a traditional loudspeaker system.
Most speakers are designed to throw sound as far and loud as possible. Parametric speakers are more like a laser beam, with the sound focused at high intensity into a relatively small area. The result is that two people can be standing only a few feet apart from each other, yet only one of them will hear the directional audio waves emanating from the parametric audio source." - Source:
http://www.soundlazer.com/what-is-a-parametric-speaker/#lightbox/0/
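(A rough back-of-the-envelope check on the wavelength point above, added purely as an illustration and not taken from the article or the quoted site; the 1 kHz and 40 kHz figures are just example values for an audible tone and a typical ultrasonic carrier.)

# Why an ultrasonic carrier beams more tightly than audible sound:
# beam width scales roughly with wavelength / speaker aperture,
# and wavelength = speed of sound / frequency.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C

def wavelength_m(frequency_hz: float) -> float:
    return SPEED_OF_SOUND / frequency_hz

for label, f in [("1 kHz audible tone", 1_000), ("40 kHz ultrasonic carrier", 40_000)]:
    print(f"{label}: wavelength is about {wavelength_m(f) * 100:.1f} cm")
# 1 kHz audible tone: wavelength is about 34.3 cm
# 40 kHz ultrasonic carrier: wavelength is about 0.9 cm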
This seems to be a consumer-level version of the military's LRAD (Long Range Acoustic Device) weapon.
Now compare to this - [By the year 2025:] "The civilian populace will likely accept an implanted microscopic chips (sic) that allow military members to defend vital national interests." - Page 46
http://12160.info/group/beyond-high-tech/forum/topics/airforce-2025...
"Destroying the New World Order"
THANK YOU FOR SUPPORTING THE SITE!
© 2024 Created by truth. Powered by