[ ♪♪♪ ] -[ Makda ] Let's talk about
who's watching you. High-tech break and enter. -Attention, Johanna and Peter,
your home is being attacked. -[ Makda ] What you need to
know to beat the bad guys. This is your Marketplace. [ ♪♪♪ ] -[ Makda ] We are travelling to
a small town in southern Ontario to deliver
some disturbing news. -[ Makda ] A family who lives
here is being watched by the whole world, and
they don't know it. Here they are
renovating their front porch. And here again, sharing more
intimate moments on the back deck, captured by their
own security cameras, and broadcast over the
Internet for all to see. Anyone can keep an eye on
their comings and goings. That's how we've
tracked them down, through their license plate. You can even watch
on their cameras, as we arrive to alert
them to what we've found. -Hi.
-Hello. -How are you?
-Good, thanks. Good, are
you the homeowner? -I am. -My name is Makda. I'm with the CBC.
-Mmm-hmm. We're kind of here
for a strange reason. It has to do with
your security cameras. -Okay. -[ Makda ] I don't know
if you realize this, but those cameras are actually
broadcasting on the Internet. -Really? -[ Makda ] Yeah.
So– How would you know? That's what brought
us here, actually. We wanna show you what we found. Really? Yeah, we can show
you what's happening right now. -Wow. Okay. -[ Makda ] You see
that right there? -And you can just get that? -[ Makda ] That's right, yeah. This is what's
going on right now. What do you think about that? I don't like that at all. -[ Makda ] You had no idea
that this was possible? -No. -[ Makda ] How long have
you had those cameras up? -Six months, maybe.
-Six months. -Yeah.
-Where did you get them? Through Amazon. I ordered them, just online. They were just a
plug-and-play system, so it was easy, no wires. Everything was wireless
through your Internet. I didn't realize that anyone
could have access to that. -[ Makda ] In truth, everyone can have access to that, through this website that searches out and shows security cameras that are using default password settings. Toronto, Chatham, Medicine
Hat, we've got a house here in Mississauga. Over here we have
one in Vancouver. There are tens of
thousands of them, streaming from across
Canada and around the world. And people don't know that
these cameras can be accessed by anybody. The website says it's just trying
to expose security issues. But these homeowners are
the ones being exposed. Look at this. They're putting
together a puzzle. I can almost see– [ Gasps ] Wow. Clothes on the chair. Wait a second. Oh, my gosh, I can see her. [ ♪♪♪ ] -[ Makda ] Over the next several
weeks we try to figure out where exactly these people live,
so we can warn them. And as we search for clues, we
find more private moments… By the pool, in the kitchen,
even upstairs near their bedrooms–moments not
meant for public viewing. And then one day… So, we've been looking for
clues and today we got a hit. See this right here? This is the first time that
we've been able to make out a license plate. By searching the license
plate on various websites, we narrow it down to an address. But is it the right one? There's a pole here. You can see a light pole. Let's go back to the video. You can see this here, which seems to match the
Google Maps Street view of this address. We're going to their house and
we're going to tell them what we've been seeing and
what other people can see. We're heading down the
highway, days later, when we think someone's home. And once again, our arrival
is being broadcast over the Internet. Hi. -I'm Makda with the CBC.
-Yes. And the reason why
I'm here, it has to do with your
security cameras. I don't know if
you realize this, but those security cameras
are actually broadcasting on the Internet. -Oh, I didn't know that. -[ Makda ] The homeowner
wants his identity protected, even though his life has already
been watched around the world. We're about to show him how. You can see here
it's a bit of a delay, but then… I'm just going to… -Well, that's no good. See,
that's us right there. Mmm-hmm. -[ Makda ] And these
are your cameras. Did you ever think that
something like this was possible? -No, no. And how long
have you had these cameras? February. Okay, can I ask, why
did you think of getting them, and setting them
up around the home? I have teenage kids and I
wanna see what's going on in my home, especially when
I am away travelling. So, you got them
for the safety of your family? Yeah. And you never
thought something like this, that anybody could just
look into your house? No. -[ Makda ] He struggles to
process the information. Steps he's taken for security
may actually be causing harm. And what exactly
have people seen? -I mean I have a pool, I come
in and out and this and that. If my kids aren't around I don't
need to change or whatever, you know? It's just–
privacy's blown already. I don't know how
you make that right. How are you gonna
have the conversation with your family about this? I'm not sure. Not sure. It's quite
upsetting and disturbing. I'm not gonna lie. That's the privacy of my
home being invaded, right? -[ Makda ] These cameras are playing for anyone to watch; if we figured it out, it wouldn't take much for anyone else to do the same. Well, I'll be disconnecting
them as soon as I go back in. -[ Makda ] So, how did the
privacy of these homeowners get so violated? We do more digging. -We have a delivery. Professional Video Security. -[ Makda ] This camera system
is the same type used by both families. It's sold by a
company called OOSSXX. -Let's get these positioned
so we can spy on you while you work. -[ Makda ] Oh, that
just sounds great. -So, what's this one? This one's the bottom right. -[ Makda ] Set up
is relatively easy. But when it comes to
connecting it to the Internet, the problem becomes clear–the
system does not require you to set a password. The default factory
setting password is empty. This means you do not
need to fill out a password. -[ Makda ] Username, admin. That means once it goes online,
other people could access your cameras too, and
there are no warnings. Okay, that's the problem.
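For illustration, the missing safeguard is only a few lines of logic. A hypothetical sketch in Python, not OOSSXX's actual firmware:

DEFAULT_USER, DEFAULT_PASSWORD = 'admin', ''

def safe_to_go_online(username: str, password: str) -> bool:
    # Refuse to expose the cameras until the factory default is replaced.
    if (username, password) == (DEFAULT_USER, DEFAULT_PASSWORD):
        print('Warning: factory-default credentials; set a password first.')
        return False
    return len(password) >= 8  # and require a minimally strong one

print(safe_to_go_online('admin', ''))          # False, with a warning
print(safe_to_go_online('admin', 'h7#kPz2q'))  # True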
We ask OOSSXX why it doesn't insist on a password, like some other companies do. But they wouldn't
answer our questions. [ ♪♪♪ ] -[ Makda ] More
smart home secrets. -What was that? -[ Makda ] And testing
some of the top brands. -I kind of like having the
different security cameras so you know what's going on. -[ Makda ] Will this family pass
a home hack attack? Get more Marketplace. Sign up for our
weekly newsletter at cbc.com/marketplace. [ ♪♪♪ ] -[ Makda ] This is
your Marketplace. [ ♪♪♪ ] -[ Makda ] Across Canada,
homes are being transformed by so-called smart devices that promise to make things more convenient and more secure. It's automated control of
everything from our lights and locks, to our TVs
and temperature. -Alexa, set the
thermostat to 23. -Okay. -Alexa, kitchen light on. -Okay. -[ Makda ] In Canada alone,
more than 100 million of these devices are now
connected to the Internet. But there is a downside. Many people don't know how to
secure their smart devices, allowing hackers and pranksters
to invade their homes, and their privacy. [ Screaming ] -What was that?! [ Screaming ] -[ Makda ] This woman is
terrified by the 21st century version of a crank call. -I can see you. -[ Makda ] Whoever's controlling
her camera can also communicate with her. [ Screaming ] -[ Makda ] Even little
babies fall victim. Traumatized at night by someone
who's taken control of the baby monitor. The dark side of all this
new technology might not occur to most. -Yeah, that's the indoor. -[ Makda ] Johanna Kenwood and
Peter Yarema think smart devices are both cool and convenient. -I love it. I think it just makes life so
much easier. -[ Makda ] But they're
looking for security, too. -And that's why I kind of
like having the different security cameras, so you know
what's going on. -[ Makda ] So they're careful
to pick top brands that promise security as a priority. Cameras by Nest. And a new lock by
Schlage for the front door. It's connected to a
central hub made by Wink. All of the devices are
controlled by apps on their phones, or by their Amazon
personal assistant. Thermostat is off. Yeah, I wanna
get more of them, just spread them out a little
bit more so I can actually walk through the house and have
all the different ones going. -[ Makda ] But could
devices like these actually make us
more vulnerable? We're about to find out.
-Park right here. -[ Makda ] This van is carrying
three white-hat hackers. Arsenii, Chris, and Michael work
for a company called Scalar. Make sure the
wireless packets– -[ Makda ] Businesses hire
them to test their security, to find weaknesses
before the bad guys do. Here we go. -[ Makda ] Johanna and Peter
have agreed to let these guys do whatever it takes
to hack their home. Okay. -[ Makda ] It isn't long before
they figure out a key component. -Here we go.
There it is, guys. Nice. -[ Makda ] They crack the
password to the home's Wi-Fi network. -Free Wi-Fi, everyone, now–
-[ Makda ] And then discover it's the same password used by
Peter to control the thermostat. All right, connected! -[ Makda ] But to
get full control, they decide they need
Johanna's password, too. Back at headquarters, they
create a phishing e-mail. It's a fake, designed to
trick Johanna into revealing her password.
Oh, she has opened it. Message has been opened. -[ Makda ] If she clicks
on the link they sent, they'll be able to control just
about every smart device in her house. The waiting game
doesn't last long. Here we go. We've got credentials, awesome! -[ Makda ] And just like that,
they're ready to hack the home. [ ♪♪♪ ] You can only see us
when we want you to. -[ Makda ] Don't let
this happen to you. That's pretty terrifying that
they're able to get into so many devices. -[ Makda ] How to fight
back against a home hack. Do you have a story you
want us to investigate? Write to us, at
Marketplace at cbc.ca. -[ Makda ] This is
your Marketplace. [ ♪♪♪ ] -[ Makda ] We're inside
a home in Oakville, Ontario, filled
with smart devices. What is it that you
guys like about having these smart devices?
-Convenience. Just some of the simpler
things, your hands are full, you need a light on. I like the security. I like being at work and having
the notifications going off and knowing what's going on at my
house while I'm away from it. -[ Makda ] But outside,
three guys in a van have a point to
prove about that security. They're going to try to hack it. Good to go. Let's take a look. Let's see what we have. -[ Makda ] Do you guys
have a favourite device? That's a good question. I'm gonna say it's probably
the inside camera, just so I can see the doggies
and see what's going on. -[ Makda ] Okay,
what's going on? Did you guys see that just now? Attention, Johanna, Peter,
your home is being hacked! Well, that's surprising. -[ Makda ] Did you expect that? No, not the Nest camera,
'cause they usually– they're supposed to be
the top-of-the-line, most secure out there. -[ Makda ] He just talked
to you through that. I know. -[ Makda ] And did you see
what was going on behind us? Yeah. It's time to turn
up the heat in here. Check your thermostat! Well, our AC's just been
put up to 32 degrees. [ Laughter ] So, it's gonna
get hot in here. -[ Makda ] What do
you think about that? That's pretty terrifying,
that they're able to get into so many devices, especially–
I'd say more so the living room camera, I think. 'Cause that's, you
know, it's our home, it's the inside, we
have a child in here, and to know that
someone can get into it… -[ Makda ] Outside in the
van, they're not done yet. Things are about to get
even more disturbing, as our hackers show some real
damage they can do when they target this personal assistant. Alexa, order a 4K TV. I've added a Samsung 4K
TV to your shopping list. -[ Makda ] Now what if
someone could actually do that? I wonder if they have access
to my full Amazon account, which has my credit
cards, my bankcard. Everything's on there. -[ Makda ] And what if they do? I guess I'm gonna
be really broke soon, owe a lot of money. -[ Makda ] Did you guys– Wanna see what
we're up to outside? Have a look at
your security camera. -[ Makda ] What's going on? Doesn't wanna load up. Oh, there it goes, offline. -[ Makda ] Your
camera's off-line. Yup, so if I was at work
and someone was coming on the property,
I'd have no idea. You can only see us
when we want you to, and that time is now! -[ Makda ] So, he said you
can only see us when we want you to see us. That's so creepy. -[ Makda ] You said it's creepy. Why? What's that?
-That's our front door lock. That's our front
door lock, yeah. I'd say that one's the more
troubling of any of them. And unlocking. I feel unsecure now. [ Laughter ] Hi, guys. I just let myself in. My name is Arsenii and we've
just compromised your house. -[ Makda ] He just
unlocked your lock. He walked in here. How are you guys
feeling right now? To be honest, a
little terrified. -[ Makda ] Why? I'm gonna say especially
if I'm not around, we do have animals and
we do care about their well-being and, you know, we
don't have the fanciest things, but, you know, you
just feel invaded. It's your stuff. It's your home. -[ Makda ] Arsenii says his team
could have done a lot of damage if they really wanted. Like, you saw us, we could knock off the camera,
come over and open the door, grab a package or
whatever, and leave. -[ Makda ] What advice
do you have for them? How can they make sure
to secure their devices? Well, for one, change
your passwords. You want to have different
passwords for each one of your online accounts. Make sure you have extra secure
passwords for critical stuff like your e-mail or, say,
Nest camera, because the Nest camera is a real
window into your life, right? It really is. -[ Makda ] Strong
passwords are a must. The longer, the better: at least 16 characters. In fact, try using a password phrase, three or four words that don't mean anything together, but you'll remember. Or use a password manager that generates and remembers passwords for you.
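For illustration, that passphrase advice fits in a toy Python sketch ('wordlist.txt' is a placeholder for any large list of common words, one per line):

import secrets

# Pick a few unrelated words: long, memorable, and hard to guess.
with open('wordlist.txt') as f:
    words = [w.strip() for w in f if w.strip()]

passphrase = ' '.join(secrets.choice(words) for _ in range(4))
print(passphrase)  # e.g. 'copper violin thursday kite'

The secrets module draws from a cryptographically strong random source, the same design choice a password manager makes.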
As for the makers of smart devices… Did someone log in? Is it a suspicious login? Is it not your home IP address? -[ Makda ] Arsenii would
like to see some changes. What can the manufacturers do
to make things more secure? The main things that they
could implement would be use of two-factor
authentication, because, you know, having just a password
as the only thing that protects your smart home is not enough. -[ Makda ] Two-factor, or two-
step authentication is already offered by some companies,
like Apple and Google. When you log into your
account on a new device, they ask for a special code
that they send to your phone, confirmation it's really
you and not someone who stole your password.
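Codes texted to your phone are one flavour; authenticator apps instead compute a time-based code locally from a secret shared during setup. For illustration, the math behind such a code (RFC 6238) is a few lines of Python:

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # Phone and server each derive the same short-lived code from the time.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    mac = hmac.new(key, struct.pack('>Q', counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack('>I', mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp('JBSWY3DPEHPK3PXP'))  # a 6-digit code that changes every 30 seconds

Either way, the code is useless to someone who only has the password.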
We ask the makers of Peter and Johanna's devices about two-step authentication
and why it's not required. Amazon and Nest both say
they have that option and encourage people to use it. Schlage says its locks just
took orders from the Wink hub. And as for Wink, after we
share the results of our investigation, it
announces a big change. Wink is now "taking
immediate steps to implement two-factor
authentication." Meantime, our homeowners
are taking steps, too. Those unsecured cameras
were quickly unplugged, and are no longer open
for the world to see. Peter and Johanna say
they've learned a thing or two. How are you guys
feeling about this? You've got these devices because
they were cool and convenient. And they were
supposed to be secure. -[ Makda ] Do you
feel that way still? Not really. -I'd probably take the door
lock off the Wi-Fi and just keep it as a keypad. -[ Makda ] Any other
changes you would make? Definitely passwords. I think that will be the first
thing after you guys leave. Everything's gonna get changed. [ ♪♪♪ ] -[ Charlsie ] Undercover
safety spot check. Oh, my gosh.
Oh, my gosh. That baby's like
nine months old! Kids just don't know
that it's not safe. We are seeing injuries that
are occurring at speed and force that we would
not normally see. I asked her to stand up
and that's when we realized that she couldn't stand up. -[ Charlsie ] We visit
trampoline parks across the country, in Ontario,
Alberta, Nova Scotia, and BC. It's an unregulated
industry no one is watching, until now.
Hello everyone. Welcome to The Computer as Collaborator:
Machine Learning and Creative Practice. I'm Marijke Jorritsma, and I'm gonna be moderating
today's panel. Today we are lucky to have with us a selection of panelists
who are at the forefront of what I believe is an exciting new paradigm in music making.
Something that challenges our basic assumptions about the role that technology should play or could
play in our creative process. Joining us today are gonna be multimedia artists, creative technologists, and musicians Lucky Dragons; conceptual avant-pop duo YACHT; and the Google Magenta team, who are currently
developing and learning what it means to make machine learning tools to support
the creative process. So we have just an hour today, and this is a big topic
that I think most of you join me in being very excited about: learning what this new technology can offer us
as music makers and creatives. So the way that we have organized it is each one of these
groups is gonna give a presentation. Then we're going to have members from each
group join us for a panel discussion. Since we only have an hour we're not gonna have time
for Q&A during this session, but afterwards if you wanna leave the theater directly and meet people
from the groups out in the hallway, you can ask them questions directly. So we're gonna start by calling Lucky Dragons
onto the stage. So my name is Luke.
– And I'm Sarah Rara. And we're just so incredibly excited and honored
to be here with you guys. Yeah, just to be a part of this because we're actively
learning about this as we go, so we feel like in a way we are in that space of kind of productive amateur,
the person who's loving and passionate about the thing they're just beginning to learn, when it comes to
machine learning. And I think the thing to keep in mind is it's not a hard step
to get into something like thinking about how you come up with intelligent machines,
or machines that know how to listen. It's actually something that we do already all the time.
We use machines to expand what we're able to sense, how we're able to listen and see, and to look at processes
and our own biases in new ways. So we can see something where we only have an instant
to look at it, and we can see it all unfolding in front of us. And I think this is something that we're really drawn to
is being able to take an action or… a thought process and to see all the possibilities unfold
in front of you, and to hold that space of possibilities in your mind
as you're working. So I think built into this is that space of expectation,
and thinking about what a next move might be. Or also for us what is really crucial is thinking about
the way that we listen. And the way that our ears and our attention are trained
to listen. And trying to discern how vision and sound might
operate differently, and how sound might be a special case in terms of listening. And the complexity of all the operations we do
as human beings: the way we separate a sound signal into the signal itself, the information it might carry about the
space it's in, the language valences. All these aspects of listening, for us, also emerge from earlier practices. We often turn back to this Gertrude Stein idea
about repetition. I think the important corollary to keep in mind with all this
is to preserve what's human or to investigate more thoroughly what is human about
music and art, or any kind of interaction. Because we are really compressing what we know about it
and how we behave towards each other and how we are alive. So this Gertrude Stein lecture
from 1935, Portraits and Repetition, she talks about how to basically compress a life
and how to describe it, and she said in order to do that you can't use repetition.
So to be alive is to not repeat yourself, it means that you're constantly insisting or emphasizing.
And this is something I think about expression: it has vitality, it is constantly reasserting itself
and insisting on itself, rather than repeating. So to describe something, to take it out of its life,
is to have repetition. And repetition is something I find that human beings
are really bad at. When we repeat an action or a gesture it's always
with a kind of difference that over time something builds and it's not true repetition, there's always this transformation
of emphasis that Stein talks about. And her definition of "genius" is to be able to listen
while speaking. So that's something we've also been thinking about in terms of machine learning.
How does one, as a human being, respond while listening? Or perform while listening? And that's something
that musicians are trained to do, trained to perform while also listening.
– I'd say not just musicians, but anyone who's alive. Yeah, any human being. Any conscious being.
– It's the condition of being alive, and you can tell if somebody's alive if you see them
listening and speaking at the same time. And if we can capture that quality of being alive
in the systems we design, in the algorithms we use, then we're preserving I think what's most vital about music.
But I think another key thing to keep in mind is, and this is already being built into the tools we use,
is the role that attention plays in all this. And this is something about how our minds have trouble
with repetition, is that we're constantly going back into our memory, anticipating things,
so this backward-forward kind of attention is something that is really crucial towards developing
an understanding. So we made a series of recordings that are based on
experiments by Diana Deutsch, who's a researcher at UC San Diego. And she researches auditory illusions.
So she researches those spaces in sound where there's a kind of cognitive slippage.
And the interesting thing is for optical illusions there are just an infinite number of optical illusions
but for auditory illusions there's only about 40-50 known auditory illusions, and most were discovered by Deutsch
in the past 20 years. So they're kind of new discoveries and it's also… I point this out just to say that maybe sound
is a special case, sound is different than image, cognitively. But in this experiment, a subject listens to a syllable spoken by a human voice, repeated at very fast intervals and alternating between left and right. And though the signal stays the same, the subject will start to generate other syllables, and then phrases and sentences. It's as if the mind cannot actually hold repetition.
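For the curious, a stimulus in the spirit of that experiment can be sketched in a few lines of Python (this assumes the numpy and soundfile packages and a short recorded syllable; an approximation for illustration, not Deutsch's exact protocol):

import numpy as np
import soundfile as sf

syllable, rate = sf.read('syllable.wav')  # placeholder recording
if syllable.ndim > 1:
    syllable = syllable.mean(axis=1)      # fold to mono

silence = np.zeros_like(syllable)
left, right = [], []
for i in range(200):  # repeat at a fast, steady interval
    # Alternate the syllable between the left and right channels.
    left.append(syllable if i % 2 == 0 else silence)
    right.append(silence if i % 2 == 0 else syllable)

stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
sf.write('deutsch_style_stimulus.wav', stereo, rate)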
To process a sound signal a human subject will insert difference
and layers of interpretation and language. So we produced this series of recordings and then we had
different listeners listen to the recording and record the words that they generated as listeners,
which are unique to the listener. Let us just listen to a little bit from the beginning,
and you can sort of let your mind drift and hear repeating patterns, some people hear it as music,
some as language. For us this is an interesting case study of something that
is produced by machine but then we sort of complete it as listeners. So hopefully you generated some language listening to that.
Sometimes when we perform that work live, people come up to me after the performance
and say, How could you use such foul language? There's children here. And actually it's all generated
by the listener. So it's all your own imagination. But the wildness and slippage that happens on the side of the listener, we're really interested in how that is being encoded into the tools that we're using. So we thought it might be worthwhile to look at some
historical ways of representing different forms of listening. So this is Pierre Schaeffer's 'Treatise on Musical Objects.'
So Pierre Schaeffer is sort of known as the first… one of the first to use sampling technology with cutting tape.
– Tape music, early sound collage, musique concrète. So basically he has this very comprehensive idea of
listening that sort of folds together these 4 different modes, and they're always kind of working in a cycle.
And the idea that there's some… There's different modes of listening, and he defined these
as Listening, Comprehending, Perceiving, Hearing. And the idea is that these do not occur kind of
sequentially or with much differentiation, they occur simultaneously with more weight, or emphasis
towards one kind of listening or the other. And I find that that really holds up today, that listening
often blends with hearing, that a kind of reduced listening to the quality of the sound is happening simultaneously
with a more linguistic parsing of the sound. And I think also where it gets really relevant is this idea
that some kinds of listening are learned, so you can learn how to interpret symbols and language
from the field of sound, or you can sort of recognize someone's voice or particular structures in things you're
hearing. But you can also kind of de-learn things. Like go back to this attention to raw, subjective perception.
It's something where, for instance, if you live next to a freeway and over time you learn not to
hear those sounds, this kind of subtractive learning, I think is an interesting polarity. So yeah,
this is the sort of cycle Pierre Schaeffer came up with. I thought it was interesting to sort of use it as a metric
to be like, well which of these forms are we using, and how do they work together?
– And how might they teach a machine to do these subtly different operations simultaneously?
– I think another beautiful way of describing a similar thing is Pauline Oliveros' idea of "Deep Listening."
That you have this global and specific listening that have to be always happening simultaneously
and kind of in concert with each other. And they happen simultaneously but you kind of
cruise through this terrain of sound by shifting your attention and in many ways that practice is something that's learned.
It's not necessarily automatic. This is Maryanne Amacher's model for listening,
where instead of composing music and sound she thought of her work as "composing listeners."
– And sometimes this is very literal, she would work… at very high volume to produce these kind of tone patterns
and then the physical response she noticed, the otoacoustic response in your ear to have
these resonant frequencies that were produced literally inside your ear.
– So it's nothing you do with any external signal, or cognitively. It's just that you have the event, you have the cochlear response and then a neurological response, and they're all kind of collaborating together to make this… perception and understanding of sound.
– And I think of her relationship to the ear almost like ours to the computer as collaborator.
In her experiment she's almost building in a kind of externalizing way of thinking about the way
she perceives sound. So there's almost an alien quality to the sounds that she's studying, even though they're
perceived all within one's own ear. And it's important to note that her music doesn't really
exist on recordings, there's a few recordings of it, but in general it was music designed to be experienced live,
and in a certain space. So she would do sound that was meant to be heard through walls
or at a distance or over durations of time. And so this kind of grounding of the listening experience
in a physical location, having a body, having your own experience, I think is something
worth taking away from this. So finally we're gonna look at this piece that we did,
a performance, for a choir that was walking along Hansen Dam in the north San Fernando Valley,
which is a 2-mile-long path. And the idea was to have the choir spread out so far
that the audience and the singers were mixed together. And part of this approach is we really come from a kind of
performance tradition that uses scores, that uses text instructions. When we think of a score
that is a way of compressing information, it's a form of compression that is then decoded
and kind of unfurls into this more complex entity. So in our minds it is very connected to digital process,
even though harkening back to this kind of older kind of algorithmic model of these text scores
that generate our work. But to really push on this idea that for anything that we
would call machine learning or machine intelligence there is a human component to it that is deciding what
to include in the training, evaluating, there's a lot of… And built into that, of course, are our own biases
and our own way of… Well, it's a reflexive process: you see your bias made manifest when you work with
these models. So we wanted to share an example of
something that was very analog, as analog as you can get, it's a performance so what you'll see a little bit is that each
singer has their own unique part of the text. And it's been fragmented across all the people working
along. And for each of the singers they have different modes they
can go into: they can teach their song to somebody else, which is kind of this transitive thing that happens
that goes from Call and Response to kind of… moving towards unison, so from Call and Response,
to a round, to unison, and in this process they almost kind of interpolate between their two individual songs. So we're interested in how somebody owns their own
style, their taste, their way of performing, their sense of being in the body that is almost meditative,
and then figures out how to share it with somebody else, it is something that I think we're thinking a lot about how to
do with computers. This is The Spreading Ground, and it's performed
at Hansen Dam in Pacoima. I have to say I'm really excited to go first because then we
set it up for other people to talk more about this. Yeah, like, please discuss this more. Thank you. But hopefully you can all see the pleasure of singing,
and the pleasure of singing together. Thank you. So next up we're gonna have YACHT
join us for their presentation. OK, so we're YACHT, we're a trio, a slapstick, etc.
For anyone who isn't familiar with our work YACHT was founded in 2002 by this one. It was named
after the sign that he saw in Portland, Oregon that said: Young Americans Challenging High Technology. We don't know what this business did, or if it was a
business, it was kinda pre-internet so we're not exactly sure. It seemed like a good idea. We've kept the acronym through a lot of different
incarnations and despite its affiliation with luxury capitalism, because we like what it articulates,
this idea that we should remain in constant dialog with technology, not in the sense that we're Luddites,
obviously, I mean we're here, but in the sense that we never wanna be passive about
the tools we're using and we wanna stay consistently mindful of our relationship with them
and the push and pull between using tools and letting our tools explicitly shape the output of our work.
We wanna preface this also by saying that we're not coders. Our relationship to technology and the context of art making
has always been on the outside looking in. We believe in tools, in access to tools, and in using tools
in sideways ways beyond their intended parameters to see what interesting things shake out in the process. And admittedly this was a lot easier to do before the larger
consolidation of technological power made it harder to look under the hood
without advanced technical knowledge. But we soldier on regardless, with our prime directive being that we wanna do as much as possible with as little as possible, which is something we picked up from reading a lot of Buckminster Fuller, and drawing analogies between that writing and our own upbringing in sort of decentralized punk communities in the Pacific Northwest. Before we start talking about our machine learning experiments, we thought we'd show you a few projects that'll give you a sense of who we are
and what our values are. So we once released an album cover exclusively via fax
by building a web application that identified people's nearby fax machines,
like at Fed-Ex and Office Depot and sent it directly to them. We liked the idea of transmitting
an image through sound, because that is what music is or should be, in our minds. And also this notion of
activating a dormant technology, this latent space, of these fax machines that are just waiting to be used.
Plus it's a way of showing what's available and of illustrating what's wasted when we move
too quickly onwards to the next thing. This should tell you that we're interested generally in
just fucking with distribution mechanisms. We once released an EP as a 2-disc set.
One disc including the EP, and one a clear, unplayable disc encoded with all of our music; we had the CD manufacturing plant stop before the reflective foil was applied,
which is what makes it playable. On the same theme of exploiting inherent vulnerabilities
and opportunities within existing ecosystems, we just recently did this incredibly small project
but released a song called Abolish ICE, that's just me saying "Abolish Ice" over and over again
for a full minute, which is the legal minimum a song can be in order to qualify as a song through digital distribution
platforms and on Spotify. So this is like a virtual picket and something that anyone
can add to a playlist as a subversive act. We just wrapped a 4-year project, which some people in this
room were involved with actually, to reactivate a dormant piece of public art in downtown LA
called The Triforium, which is a 6-storey, 60-ton concrete structure
originally designed to synchronize light and sound into a new art form called Polyphonoptics.
We worked with this amazing interdisciplinary team of coders and artists and musicians to bring back lights
which had been offline for decades and to rescue the original program software
from 8-bit paper tape, and make it responsive again to live musical input.
We shared this because again, it's an example of what can happen when old and new technologies co-habit
and what happens when you really think about how to make use of things as much as possible
with as little as possible. So we became interested in machine learning and creativity
a few years ago, you know, for all the same reasons that all of us probably have.
We've kind of fucked around with every publicly available tool, but we thought we'd share with you
the specifics of a method that has worked for us in terms of making a successful output,
our metric of success being a YACHT song. We're not trying to make metal machine music here,
we wanna make pop music, music that we can stand by and put our name on, and if we're gonna use a totally
new process, it has to produce results that will sound like us, feel like us, and feel like a step forward,
or at least an interesting step sideways. So we used Magenta's MusicVAE model,
which allowed us to use latent space interpolation to create a pathway between different melodies
from our back-catalog. So this is what it used to look like.
– Maybe it looks different now, we'll hear in a minute. So in order to work this way, we first had to annotate
our entire back-catalog of music into MIDI. So we separated like, bass, guitar, vocal melodies,
drum patterns. And so then we took pairings of those melodies, fed them into a MusicVAE model at different temperatures, and sometimes like, a bunch of times, just to create a big collection of 16-bar patterns that we could go through and compose the song with.
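In code, the workflow they describe looks roughly like this. A minimal sketch, assuming the open-source magenta Python package, a downloaded 16-bar melody MusicVAE checkpoint, and placeholder MIDI file names:

import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

config = configs.CONFIG_MAP['hierdec-mel_16bar']
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path='hierdec-mel_16bar.tar')

# Two annotated melodies from the back-catalog (placeholder names).
start = note_seq.midi_file_to_note_sequence('melody_a.mid')
end = note_seq.midi_file_to_note_sequence('melody_b.mid')

# Walk the latent space between them; temperature adds randomness, so
# re-running at different temperatures builds up a collection of
# candidate 16-bar patterns to compose with.
for i, seq in enumerate(model.interpolate(start, end, num_steps=8,
                                          length=256, temperature=0.5)):
    note_seq.sequence_proto_to_midi_file(seq, f'interp_{i:02d}.mid')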
So to demonstrate, let's start with just a melody. So this is a single vocal melody. This was one of several generated sequences from
an interpolation between 2 old songs: "Hologram" and "I Wanna Fuck You Till I'm Dead." So you wanna step back and talk about why? Generally speaking a lot of artists like to set hard and fast
rules when working with any sort of material, and this material seemed explicitly good for that.
We've always worked best with self-imposed or external, i.e. monetary, constraints. So we made up a set
of rules that we think works and allowed us to refine… this kind of output into a way that felt personable to us. So for example, some things we can't do, we couldn't add
any notes, we couldn't add any harmonies, we can't jam or improvise or interpret anything.
Essentially we can't be creative. So there's no additive alteration, only subtractive or
transpositional. Yeah, but we can assign any melody to any instrument,
like the melody we just played, I said it was a vocal melody, so we just arbitrarily, or not arbitrarily,
but in our hearts decided that would be that. So we could do that: it could be a keyboard line
or a bassline or a vocal line. And we can transpose anything that comes out
to the song that we're working on to the key of that song. And we can structure it, we can cut it up,
we can basically do collage with all of the output to form something into a song-size piece, I guess. So within that set of rules we can basically give
everyone in this room the same output, set of output, and we would all make completely different songs,
which is kind of the point of all this. So now we're gonna play you what we did
with that generated melody. We decided it would be a vocal line. And it sounds like this. So one of the most interesting and challenging parts
of working with this kind of material is actually performing it physically, 'cos the melodies that it generates have a pretty
minimal relationship to the body, or at least to the competencies of our historically
pretty punk-rock operation, like, I'm not a good singer, not a virtuoso singer,
and one of my fears going into this was that I wouldn't be able to sing these melodies, 'cos
they're so incredibly precise there's no room for variation. And even the things that sound simple are really outside
of our own really embodied patterns of play. Yeah, just some of the drum patterns. So we did this not only with melodies but with drum patterns, and then sometimes we transposed the drum patterns
into chord patterns since this model didn't do chords
at the time we started working with it. But playing drum parts that on paper look simple
and even sound a little goofy or stupid ended up being incredibly hard to play, like a fill that was
maybe 2 bars took me 30 minutes to learn just because it wasn't in the palette of drumming
that I'm used to playing. It makes you realize how much of your performance
comes from previous habit and crutches you have internally. Or leaps that you tend to make based on your own
musical upbringing culturally. It allows you to see yourself a little more clearly. And there were many instances in working with this material
that the melodies actually taught us to play better. Or allowed us to imagine different ways of playing.
– Let's talk about lyrics. So the lyrics of the song that you just heard,
or the fragment of the song you just heard are actually the product of a collaboration with the
technologist and poet Ross Goodwin, who, when we started working with him was a free agent
but now got snapped up by Google. So to create the model we built a corpus of 22.6 MB
of lyrics, which is about 4 million words. And that's everything we considered to be our influences,
the stew that makes a band a band, like our own back catalog, the music that was around us
geographically growing up, the music our parents were listening to, our friends' bands,
all the things we would consider to be our influences and more, because 4 million words is a huge amount
of material. The result was this. This document is actually just
one continuous block of output from the model that Ross created that we printed out on a single sheet
of matrix printer paper because we're divas and it's fun. It looks great. But there's an unbelievable richness to this material. Like, the output that the Magenta model creates is these sequences of unexpected words
which sometimes have structural cohesion but which fall apart after 2 or 3 phrases. And that's one of
the most exciting and edifying things about it, is to sort of watch how it meanders and tumbles
through language. And to make lyrics, we essentially broke the lines of text
that we thought were interesting over the backs of these generated MIDI melodies,
which of course also have no relationship to the internal mechanics or cadences of human language
or the English language anyway. Which led to all kinds of interesting semantic challenges
like pulling syllables apart kind of against the grain, and singing them in ways that seemed weird
and then I'm certain that when people listen to this whole song or any other music generated
with this material that we create they will constantly be mishearing things and hearing
mondegreens because it's just like not the way language normally breaks out.
And speaking as someone who normally writes lyrics, you know, with some sense of meaning
and some sense of rhythm and cadence, performing generative lyrics in this way
really opened me up to the kind of obvious possibility that words are just sounds, that's how a lot of people
write pop music, right? They just think about it in terms of melodic math. Whose thing is that? – Max Martin. Yeah, Max Martin has this thing about melodic math,
which is just like, these syllables sound good with these notes and that.
It doesn't matter what it means but that's not how I write songs, we write songs.
So it really forces you to experience words as sounds, and allowed us to accept the possibility of meaning
coming after a sound. And beyond sound, the lyrics have this really interesting
quality, like a lot of generated material does across all spectrums, of visual as well, of being
kind of at the edge of meaning. And there are phrases again that seem like they're going
somewhere and then they collapse and then there's these really strong images
that I never would have written, like "I can feel it in my head like a dog in bed,"
for example. A beautiful image and very vivid but not something that you would leap to normally
when you're writing a song. So, OK, disclaimer, it utterly terrifies us to play any new
music, especially at this point in the process, but in the spirit of sharing our process.
– Especially to this audience too. So please keep that in mind. We're gonna play you
the full song, this is a rough and unmastered mix. Could we bump it, too?
– Can we dim the lights? It's amazing 'cos it sounds so much like you
and it's such a far cry from what people think of when they think about making music
with algorithms. They think you're gonna push a button and something…
– Yeah, save it for the panel. OK, we'll move on. Thank you.
– Thanks, you guys. Yeah, sorry, I'm trying to be… This is horrible 'cos
I wanna ask them so many questions. OK, so joining us here are members of the Google Magenta
team. We have Adam and Jesse joining us, who are going to talk about the work they've been doing
developing tools so other artists can do these kind of explorations.
I'll let you guys take it from here. So, I'm Jesse and this is Adam. And it was just fantastic to get to watch the previous
speakers present some of the stuff 'cos a lot of that is kind of a motivation behind what we do.
So we're on the Magenta team, just a quick… pictorial in case you forget who we are. But what is this really? We're in a very fortunate position where we're given a lot of leeway to choose what we
research, and we decided that… So first and foremost, we're a Machine Learning
research group, so we work on new types of algorithms, generative models that can make art and music
and these types of things. But why are we here? We're normally at academic
conferences, so why are we at Loop? And what sets us apart from different groups is that
we're not just interested in machine learning for machine learning's sake, we're really interested
in what happens when you stick a human in the loop, in trying to create models that can be part of the creative process, and in what works and what doesn't, in terms of actually
learning tools, and everything we do is open source, all of our research is completely open. And the reason
we do all this is we're really trying to create a cultivated community of developers and creators
working together to explore this territory. So that's why it's really important that everything we do
is open source. So I'm gonna get real concrete for a second but then
Jesse's gonna get all philosophical in a minute, too. But one of the main reasons we're super excited to be here
is to announce Magenta Studio. And this is something we've been working on for a few
months now. And it's a set of Ableton plug-ins through Max that kind of
show off the functionality of several of our models. And right now we have 4 different plug-ins, you can
download them for free at that link, it's all gonna be open source and everything.
And we're gonna be adding more over time. And we're hoping that also people from outside of Google
and Magenta will be contributing stuff to this as well. So as you kind of heard a minute ago, it wasn't quite
so easy to use our models a few years back. This is actually before what you were seeing previously
which is a big step up from this. But you used to have to install PyPI packages,
make sure your GPU was set up, this is a command line that you would run
to generate some MIDI files that would get dumped in a temp directory that you would have to search
for. So we knew this wasn't the right solution, but for
computer scientists this is great, it's what we usually use. So we quickly got to work trying to develop some
more usable interfaces and one of the outcomes of that was NSynth Super, we worked on this with Creative Lab
and it's an actual physical hardware synthesizer that uses one of our models, which allows you to
morph between the sounds of 4 different instruments that you can control with the corners there.
And we put all of the instructions to build this yourself on GitHub and a bunch of people
took us up on it and actually built them and made music with it, which was great 'cos that
was our goal. And if you haven't heard this before
you should check out that link. One of the problems with this type of model of doing things
though is by the time our research had… we'd gone so much beyond this, and it was kind of
old stuff for us, and to do all this work it takes to launch something
like this for every single one of the models we're developing, at the speed at which we wanna iterate, was just not sustainable. So we built magenta.js, a JavaScript library that brings the power of these models to your web browser. And we open-sourced that as well; a bunch of creative
coders from around the world developed these creative interfaces that let you play
around with different types of musical instruments that they dreamed up just tapping into the power of these
models. And what's really cool about this is that once Max added
very recently the support for Node.js, we could now bring magenta/js into Ableton,
and that's what powers Magenta Studio that we'll talk about a little more later after Jesse
gets philosophical. So how many people have heard of Artificial Intelligence? So there's a lot of hype and buzz in this field,
and it's gonna radically transform everything and all that, and there is a lot of hyperbole and exaggeration,
but it kind of does a disservice to the fact that there actually are some really interesting things
that are happening in the field of machine learning that are actually practical and useful
and really interesting things to explore. And so when we make music with machine learning
it's basically algorithmic music right, we're using algorithms to help aid our process,
and that's nothing new. If you went to a parlor, if you were in your parlor in the 18th century, as one does,
and you would play with musical dice, where you would be composing pieces just by rolling dice,
and these dice would be composed by a "programmer," a composer, who would actually write little melodic snippets, and the
user would create new pieces by just rolling the dice. And it's an old example but a very good analogy
for a lot of the procedural and machine learning generation systems we do today. And just on a very high-level
overview: when we have procedural generation, it relies on a lot of expert knowledge. A person has to
come to the situation, if you have a thought in mind about like, OK, there's gonna be this key structure,
and there's gonna be these rhythmic patterns, you have to really encode all that into a program
that a user can then use to generate music. And what machine learning really does is it allows us to
relax those rules of having to know all that expert knowledge,
having to figure out what's the structure underneath, because we can use a lot of data in these series
of algorithms to help uncover some of the structure that was otherwise hidden unless we could think of it.
So then the programmer comes to this situation where you have the realm of programs that are possible
and then when given a set of music you can train the program to select a given program
to generate music. So the analogy to the dice is like, if instead of having a composer write all the sides
of the dice, if you… got a bunch of compositions from a specific composer
you were interested in and then the algorithm decided what should be on the sides of the dice
so that it can best emulate that style.
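As a toy illustration of that analogy, here is the "programmer as composer" version in Python; in the machine learning version, the table's contents would be learned from a composer's pieces instead of written by hand (everything here is made up for illustration):

import random

# One hand-written melodic snippet per die face, for each of eight bars.
SNIPPETS = {bar: [f'bar{bar}_face{face}' for face in range(1, 7)]
            for bar in range(8)}

def roll_piece():
    # Compose a "new" piece by rolling the die once per bar.
    return [random.choice(SNIPPETS[bar]) for bar in range(8)]

print(roll_piece())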
So there's 2 elements of surprise. One is what actually happens when you roll the dice. But the other is more of a structural, semantic thing
about what is the structure these algorithms actually learn. And since it's relatively agnostic to the actual data
that you're training on, that's why everyone talks a lot of hype and buzz
'cos you can apply it to a lot of different fields. Some of which are actually useful in a utilitarian sense,
like object detection or detecting cancer in images and these types of things. But since we're artists
you can also do really fun things with it too. And, like, I love this creepy image of the faces blending.
You wouldn't know how to write rules about how to generate that, these are all fake faces,
these people don't exist. This is data that was taken from a photo booth
and was part of an art project so people knew they were having their pictures taken.
And then the model learns the commonalities among peoples' faces and then we move around
in this learned space and it does some realistic and unrealistic things
but the way it's unrealistic is still compelling and interesting. So that's a lot of those examples, but we just wanna get it
concrete and really not abstract, so what does this mean for music? We'll go through
a couple of case studies with the plug-ins that we're releasing now. So the first we're gonna talk about we call Groovae.
And this one is kind of like a humanize plug-in that you might have used before, where you have
a quantized drum beat and you wanna make it feel like an actual drummer's playing it and add a bit of feel to it.
So those typically work by either randomly sampling some velocities and offsets,
or by applying a pretty static groove template across the entire piece of music. What we do here is we use
machine learning to actually learn that transformation. So we took a bunch of professional drummers and had
them play on some MIDI drum kits, and took that output and then we quantized it to make it look like something
that would come out of a sequencer and then we trained the neural network to go back
in the other direction, so to take those quantized beats and figure out how would a person actually play that,
since we had both pieces of data. And now you can put in any quantized drum beat
and use the model to go back the other direction. And what's nice is that since it's statistical you can sample it
a few times to see different options essentially.
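The same humanize step can be sketched outside Ableton with magenta's Python API. A rough sketch, assuming the pretrained groovae_2bar_humanize checkpoint from Magenta's released models; the MIDI file names are placeholders:

import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

config = configs.CONFIG_MAP['groovae_2bar_humanize']
model = TrainedModel(config, batch_size=1,
                     checkpoint_dir_or_path='groovae_2bar_humanize.tar')

quantized = note_seq.midi_file_to_note_sequence('stiff_beat.mid')

# Encode the stiff beat, then decode: the model re-adds the timing offsets
# and velocities it learned from real drummers. Decoding is stochastic, so
# each run gives a slightly different feel.
z, _, _ = model.encode([quantized])
groove = model.decode(z, length=config.hparams.max_seq_len)[0]
note_seq.sequence_proto_to_midi_file(groove, 'grooved_beat.mid')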
So let's see how this actually works in Ableton now. You open up the Groovae plug-in and you get your quantized drum beat,
so it's nice but it's pretty stiff right. We can hit the Generate button here,
and now here's the Groovae version. So it's the same underlying beat, but now you can see in
the bottom there's velocities applied. I'll add a little funky bass to just kinda set us off a bit.
But there's velocities applied, some offsets, and then we can switch back to the original
just to hear how different it sounds. So this is again one of the plug-ins that are available. – Yeah, it's like, sometimes you don't know what's missing
till it's there. So another common case is you have a good start
of an idea but you wanna think of different ways to continue it.
What are ways I can develop a theme? So that's what this plug-in does. And the way we train
these models is in the exact same way you would try to do it
yourself: imagine you have this full sequence but we only show the model part of it. So this is some
melody, and we say, well, probabilistically, what's the next thing that should occur. And so then it makes a prediction and then we say,
given all the things you've predicted feed back into yourself and what's the next thing
that would occur. And if you do this with a lot of training data
and millions of melodies, then in order to be able to make those predictions it needs to learn about things like key and rhythmic patterns.
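That training-and-sampling loop is the standard autoregressive recipe. A generic sketch, where the model object and its next_note_distribution method are hypothetical stand-ins for a trained melody RNN:

import random

def continue_melody(seed_notes, model, num_new_notes, temperature=1.0):
    # Repeatedly predict the next note and feed it back in as context.
    notes = list(seed_notes)
    for _ in range(num_new_notes):
        # Hypothetical: returns {candidate_note: probability, ...}.
        dist = model.next_note_distribution(notes, temperature)
        next_note = random.choices(list(dist), weights=list(dist.values()))[0]
        notes.append(next_note)  # the prediction becomes part of the input
    return notes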
So here's an example with an original melody. We go over and choose the Melody mode, select a track, and choose a clip. And we can choose the number of variations we wanna
make, we're gonna add 3 bars to it, and we'll click Generate. And then it just drops them right in there. So you can see it retains a lot of the original idea
but it does variation, it stays in the same key as the original. And this is just an initial offering. We've done a lot of
research since then, that we hope to also incorporate, this is just an example now that is trained on classical
music, and as an added bonus the audio itself is created
with a neural synthesizer, like mentioned before. So it's a piano recording, but it's actually playing back
with a neural network. In the interest of time, we'll leave it there. But you can see
that it was given the initial seed of the sort of classical melody and it maintained a lot of the gesture, although the long term structure didn't really develop
over the course of minutes or something like that. Another common situation is just getting started
is the hardest thing. Where do I start? Sometimes you got writer's block.
So the Generate function helps with this, where you can just ask it to create you some melodies or some drums.
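A rough Python equivalent of that from-scratch generation, using the sampling side of magenta's MusicVAE (config and checkpoint names are from Magenta's released models, used here as placeholders; a sketch, not necessarily what the plug-in runs):

import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

config = configs.CONFIG_MAP['cat-mel_2bar_big']
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path='cat-mel_2bar_big.ckpt')

# Four 2-bar monophonic melodies from scratch; nothing conditions the key,
# so each sample may land somewhere different.
for i, seq in enumerate(model.sample(n=4, length=32, temperature=0.9)):
    note_seq.sequence_proto_to_midi_file(seq, f'starter_{i}.mid')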
So in this case melody just means monophonic, so it actually made a bassline. But it's gonna be in different keys, 'cos we haven't
conditioned it to be in any particular key. So here's a rhythmic… Something maybe a little bit more classical. Then back to the bassline. And we can also maybe add some drum beats
to it as well, so we generate some drum beats. So, it's not gonna be on the radio any time soon,
but it might be a lot more than where you were starting, and it might give you inspiration to take that
and tweak it and make it into something your own. Alright, so the last one we're gonna talk about is Interpolate,
which uses the MusicVAE algorithm you heard about. And the idea here is to interpolate or morph
between different inputs. And just a kind of a quick overview of how we think about
this. Instead of just taking, for example, 2 melodies
and mixing together the notes from those 2 melodies, that doesn't typically sound very good, so what we do
is we map them into what we call an Attribute Space or a Latent Space that's kind of a learned representation
of the different important qualities of those melodies. And those are what we mix together, and then we map it
back into the space of notes. So what I'll show you in just a second, starting with…
We'll take 2 melodies represented by these 2 dots, map them into that space, and then we'll kinda walk
the line between them, in the Attribute Space, and hear what comes out at each step. And what you're
gonna notice is it's a very smooth transition.
they don't sound like mixtures of things. So here's our first melody. And then here's the one we wanna end up at. Alright, we made it. So that example's really nice, 'cos it does sound like
a piece of music on its own, and you can experiment with this in our new
Ableton plug-in called Interpolate. The example I'm gonna show here really quickly
is actually doing it with drum beats instead. We're just gonna take 2 drum beats and then just kinda
get the mid-point that lives just halfway between them. So let's hear the first of the 2 drum beats we're gonna start with. And the second one. And we're gonna generate here, and it's actually gonna
output kind of a reinterpretation of the first, so that's what this is.
And now it's gonna come to the kind of mid-point.
– So in this, is the model being trained based on what you put in first?
– Good question. These are actually pre-trained models.
But with this type of model we're using that pre-trained Attribute Space, so whenever you give it a piece of music
we're mapping that into the pre-learned Attribute Space and then doing that mixture and mapping it back.
So just on that point: all the models are open and we have all this Python code and stuff you can use
to train your own models, but a big open machine learning question we're really interested in
is how to make these more adaptable, like, right in front of you and on your computer
and learning more directly from your own… Just to be clear, when we train these we use a ton of specialized hardware and a lot of data that most people don't typically have, so we're trying to learn how to make models that are still adaptable by the end user. I wanna know how we're doing on time.
– I have a question. I think this actually goes till 1. We just have a few minutes, and I… I feel like this should be
a 4-hour long discussion. Why don't we bring our representatives from other
groups back on stage, and just do… I have like, 5 million questions,
as I'm sure the audience does as well. OK, so we've had some really interesting presentations here
in terms of both the creative artists' side, getting how you're sort of dealing with this new paradigm of making music, and then also the technological side, where you really have your hands in machine learning. And so for a lot of people out there who want to maybe
start thinking about how to integrate this into their process, I'm curious what you think are sort of the fundamental concepts. From the creative making side, what are things that you are fundamentally embracing in this new tool? And then maybe also from the machine learning side, what do people fundamentally need to understand? Like, if I'm introducing someone to electronic music who's been playing analog instruments, I might introduce the idea of modularity or signal flow, these kinds of things. Or the idea that you can create non-linearly. So I'm curious to hear from… you all, what you think our audience needs… like, how they can approach this topic.
– Oh my god, it's such a huge question.
– Well, I mean, you don't have to have the whole answer, but it's like, you're in it already, you're doing it.
– So, I love the diagram that shows the crumpled piece of paper with the 2 balls in it, and the flat piece of paper with 2 balls.
– That was exactly what it was, by the way.
– I think that being able to think about Latent Space is something that's extremely interesting
from an artistic side. Because you're like, OK, everything we thought of as a feature, something that appealed to us or that we recognized in what we were listening to, now there is a space that captures all those things, in a way that something like MIDI, which was great for many years as a way of representing something as complex as the experience of listening to music, just couldn't; there was a lot missing from that. So I think it's this idea of, you know, understanding that we've been living in a world of very lossy compression, and all of a sudden we're having a lot more interaction with how things are being compressed. And, yeah, just compression being the relation between our experience and the way we describe things or the way that we use them. So it is this modularity, but it's like a semantic modularity.
– So would you say a fundamental concept is: be prepared to question an assumption that you had about… what you couldn't control before? Or…
– Maybe I can help build on this as well.
It's interesting 'cos a lot of these systems we're describing, you could do before: if you spent a lot of time studying music and computers, you could come up with some rules that would do similar things. You can have attributes that you can move around in an Attribute Space and stuff. What's interesting here is that these algorithms… you can set them up to learn it from the data, so in some ways it might extract things from the data, attributes that are there that you would normally expect. But it can also be a helpful thing in terms of reflection: like, if you're using it for your own music, you know, sort of learning, oh, I didn't realize that we had this whole thread to our music that we weren't aware of before.
– That's something that I actually expected to see more of going into it. It's just like…
– More of a reflection of your process?
– For the process to be more of a mirror. I think it actually
has this sort of democratizing quality, because the data sets are so large: you're not just working with yourself and your collaborators, you're working with the entire history of music in your back pocket.
And so it allows you to stretch inter-generationally and hopefully cross-culturally as well, although I know
data sets are still pretty limited in that department. But I think it's fascinating to see the patterns that emerge
that aren't just patterns in your own work but in the larger thing. Like, for example, in the corpus of text that we have, when the temperature is really low you obviously see a lot of repetition, and those repetitions tend to be the kind of things that come up a lot in pop music, like love, sex, death, hate. It's like an entire page of 300 instances of the phrase "I want you." Or "I want your brain," which is the scariest. Or "I need you," "I want you," these deep undercurrents of human emotion and familiarity that come up when the temperature is low. And when it's high, you see there's more oddness and these generated proper names and stuff. So I feel like it's a mirror to humanity, not so much a mirror to the individual artist.
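[Editor's note: the "temperature" being described divides the model's raw scores before the softmax, so low values make sampling near-greedy and repetitive, while high values flatten the distribution and let oddness through. A minimal sketch, with made-up scores:]

```python
import math
import random

def sample_with_temperature(scores, temperature):
    # Divide raw scores by the temperature, exponentiate (softmax), then sample.
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)                                # subtract max for stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(list(scores.keys()), weights=weights)[0]

scores = {"I want you": 2.0, "I need you": 1.5, "I want your brain": 0.2}
print([sample_with_temperature(scores, 0.1) for _ in range(5)])  # mostly repeats
print([sample_with_temperature(scores, 3.0) for _ in range(5)])  # more variety
```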
– That leads me into my next question, which is: from your experiences, and I guess from your work with artists, what ways does machine learning offer us to sort of… advance our creative processes? What's coming out of it that you're learning or changing that was a benefit?
– I think it allows you to see how much choice there is
in every moment you make work. In our little MacGyvered system, there are so many rules that we've set in place in order to make the output consistent within the ethos of what we're trying to do here. But even when it feels restrictive, there's just this infinite
amount of choice still in the smallest things. In production, performance, arrangement, and pattern
and structure, especially structure, 'cos that seems to be where the models fall apart.
But just the intense amount of choice there is in everything, and how you could very easily get bogged
down in individual choice from one moment to the next. But when you remove a lot of these external things
you end up being freed into the details.
– Did you feel, when you were doing the vocal work, that you were being challenged, or discovering yourself as an artist in a different way?
– Beyond.
– Like, in what way?
– Again, I'm not a virtuoso singer. I'll often joke, like,
I'll sing the same thing 5 times and I'll be like, 5 great takes, and they're all different.
But it does force you to laser in on the sound. Especially when the words begin to lose meaning, and to break the word into the lyric you have to do something really weird with it that's unfamiliar.
– I feel like we can all say that, that it challenged us in unexpected ways.
– I just wanted to pick up on something you were saying
earlier about the reflection of the whole. And I think that's a really important thing with where
these tools are now: it's important to… really not feel afraid to personalize it. Since it's trained on these large corpora and is taking the mean of them, it's there for inspiration and things, but to really make it adaptable you have to do that part yourself at the moment. And I think that is an interesting way that these algorithms
are gonna change in the future. But they're not there at the moment.
– I like that limitation. I mean the fact that the models can't quite spit out
a fully formed pop song, beyond the economic ramifications of what that means
for artists, it's nice to know that there is an actual conversation that's happening where the tool is really good
at this one specific thing, and I'm still good at the things I'm good at,
and I can identify interesting patterns and arrange them and make structure out of it, 'cos that's what art making is
a lot of the time.
– We've talked about this a little. Your presentation was about
investigating perception and using the tools to sort of reflect on what it means to categorize listening.
– There's a kind of choosiness, I think, you know, to be like, this is good, this is bad.
That sort of has been opened up a little bit now, and there are other processes that are going on
that you maybe are not aware of, like biases you have, things that you… Taste might come into play.
And it is funny, this question of, Oh, can I train it on different data, 'cos you're like…
you always wanna push outside of what's there, but I think it's interesting that these models are very robust.
You can't break them, you can't make them broken… I mean, I don't think it's broken, I think it sounds good. I think it's actually hard to make them fail at all.
They're just in a permanent state of broken. Brokenness is what's interesting. This is a concept
I'm trying to push forward, this idea of the computer accent, which is that generated material, like text, visual,
notation data, they all have this weirdness to them that's aesthetically really interesting and also very finite
in the sense that the models will all get more sophisticated very quickly, much quicker than what we probably
anticipate. And that will be gone, that weirdness will be gone,
and we will totally be nostalgic for it in the way that people are nostalgic for vinyl
and tape and analog now. It is also this thing about it being able to do
more than one thing at once, which is something that, like, to make a fully formed pop song you'd have to be able
to do all these different paths simultaneously, and that's something that humans are really good at doing,
even though they have to make these hard decisions.
– Just on the rough-edges thing: it's kind of interesting that normally, when people design these algorithms, they're trying to optimize something. But here we don't actually have any metric for what we're trying to optimize, 'cos we wanna optimize how useful it is for somebody. And that's different for you, and you, and you. It's a personal thing, so it's a constantly shifting target. So it asks us, from an academic standpoint: what types of models can be more adaptable? You actually investigate different math in terms of what can actually solve that. And when it makes mistakes… it's learning from the data, so the types of mistakes it makes
are more grounded in whatever it is. So NSynth has a very unique sound quality to it,
meaning it's very compressed and the pitch shifts up and down, and there's extra
harmonics 'cos it's not drawing the waveform quite right. But what's nice is those are very different from the types
of artifacts that you would get with other synthesis algorithms
'cos they're sort of based in the data itself. So a lot of people really like that stuff, and we made
a better algorithm, and they're just like, No, I want that 2016 NSynth style.
– Totally. I wasn't gonna tell you, but we hated the way
the NSynth sounded at first. It's like, this sounds like the sound quality is bad.
But then you recognize it as its own aesthetic and then it becomes incredible.
– It's like the TB-303, which was originally created to… to replace, like, symphonic sounds, and people thought it was garbage. But you just have to change, to reorient your perception and listen differently.
– So, I think we're just about at the end, unfortunately.
I know, it's a travesty. But for people who wanna get started with this, what are a couple of go-to resources that they may not know about? From all of your different perspectives.
– I'm sorry, plug. Plug: g.co/magenta/studio.
We've worked hard to try to make it accessible and we would really love feedback. We have discussion
groups and everything, so we would love people to say, Hey, I wanted to do this, this worked for me…
– It sounds like you guys are really looking to… understand how artists are using these tools, and that is
informing your process of development and design. So this is an important time for people who wanna be
involved in what the future of this technology is gonna look like, to get involved and to give
you guys feedback.
– Couldn't have said it better myself.
– What do you guys think?
– Same. There's a lot of consumer-facing AI music-making start-ups, apps and tools and stuff, and they're honestly mostly for people who want royalty-free music for their YouTube make-up vlogs, is my vibe.
– That's valid.
– Not a lot of expressive control in a lot of those tools. So I would really say Magenta is the place to be.
– Honestly, there's also this sense of kind of a more… hack-y approach of… If you start thinking about things
like image and sound, and tools that are designed to do one thing,
if you can repurpose them then you can kind of get into a more wild terrain of things people are working on.
Maybe you found something that is for image recognition, but you think about: how can you turn a sound into an image, and how can you listen to that image? And I think this is a really fertile way to be.
Thinking about the materiality of things, and what you can do with synthesis, is just to sort of start thinking about things as being sonic in some way. If that sounds vague…
– I love that. Start with a thought experiment.
– I think a lot of these tools are like, OK, we're gonna use image recognition techniques
on sound, and you can kinda do that yourself a little bit if you're a little hack-y about it, especially if you're using
a tool like Max/MSP.
– We used to break it open a lot.
– There's a lot of resources out there. Also if you just want an intro to machine learning and stuff,
there's this blog Distill that does a very good job of visualization and all that type of stuff.
– Learning about learning is very compelling.
– Cool. Thank you, guys. I hope that all of you
have enjoyed the presentations from our panelists. For me it's been wonderful to get to hear all these
different perspectives. I have a couple of announcements, one is…
Actually let's just do a round of applause for everyone.
Do you want to learn how you can unlock the bootloader on your Motorola phone? That's what we're going to do in this video. If you haven't already subscribed, make sure you subscribe to the channel and click on the bell icon to get notified of new videos. Hey YouTube, what's up, my G here, back with another video, and in this video I am going to show you how you can unlock the bootloader on your Motorola phone. For the purpose of this video I'll be using the Motorola Moto X4, but the process should be the same for most Motorola phones out there.

The first thing we need to do is go into Settings, scroll down, go into System, and go into About phone. Once you're in About phone, go ahead and tap on the build number seven times; you will get a message that you are now a developer. Now we need to go back. Next, go ahead and click on Advanced and you will see a new option called Developer options; go ahead and select this guy. Once you're inside Developer options, you need to scroll down and go ahead and enable OEM unlocking; go ahead and click on Enable one more time. Now, if you are not able to toggle this switch, meaning you're not able to enable OEM unlocking, that means you cannot unlock the bootloader on your Motorola phone. On certain phones, like the Motorola Moto X4 Amazon Prime exclusive edition or Motorola phones from Verizon, you cannot unlock the bootloader. Once you have been able to enable OEM unlocking, go ahead and scroll down and look for USB debugging; go ahead and enable this guy as well, and go ahead and click on OK.

Now we need to go ahead and boot our phone into fastboot mode. To do that, go ahead and turn off your device. Once your phone has turned off, go ahead and press and hold the volume down button, and at the same time press and hold the power button, till you see the fastboot screen. Once you're at the fastboot screen, go ahead and connect your phone to the computer using the USB cable provided by Motorola.

Alright YouTube, moving on to the computer: you need the latest ADB and fastboot, and you need the drivers for Motorola mobile devices for your computer, be it Mac, Linux, or Windows. Both of those are linked in the description of this video. Go ahead and download and install the drivers, then go ahead and download the ADB and fastboot zip file. You will have this file called platform-tools; go ahead and unzip it. Once you've done that, you will have a folder called platform-tools, and inside this folder you will have ADB and you will have fastboot.

Once that is done, we need to go to Motorola's website, which is also linked in the description of this video. We will land on this page, and do note: unlocking the bootloader of your Motorola phone will result in a factory reset, which means you will lose all the media content on your device, your pictures, your videos, your data, so make sure you have backed up your data. Next it wants you to sign up or sign in, so I will go ahead and do that. Once you have signed in, go ahead, scroll down, and click on Next. On this screen it is asking us to get a device ID, and the command for that is fastboot oem get_unlock_data, so we'll go ahead and copy this guy.

Next thing: Mac and Linux users need to open Terminal; Windows users can go ahead and open Command Prompt or PowerShell, whichever you are easier with. Then you need to go to the folder where we had extracted ADB and fastboot. Windows users can directly go to that folder, then go ahead and press the shift key and then right-click, and you will get an option to open Command Prompt here or open PowerShell here.

The next thing which we are going to do is check whether our device is being detected in fastboot mode or not. The command for that is fastboot devices; Terminal and PowerShell users need to enter ./ before the command, while Command Prompt users can go ahead and ignore the ./. Once you've entered the command, go ahead and press enter, and there, as you can see, my device is being detected in fastboot mode. The next command which we are going to execute is the command which will get us the unlock data which Motorola needs, so we'll go ahead and enter the command. Once you have entered the command, go ahead and press enter, and this is our unlock data. We'll go ahead and copy all of it: we'll select this guy, then right-click, and then copy. Once that is done, we need to go back to Motorola's website.

Back on Motorola's website, they need the unlock data in a specific format, but if you go ahead and click on the data scrub tool, it will open up a new window. Over here we'll just go ahead and do Ctrl+V or Cmd+V. Next we'll go ahead and click on Format My Data, and there it is: Motorola has provided us with the data which they need, in the format in which they need it. So we'll go ahead and select this guy, right-click, and copy; then we'll go ahead and close this window. Back on the original page for bootloader unlock, we'll go ahead and paste what we had copied over here, then go ahead and click on "Can my device be unlocked?". Once this is done, we need to go ahead and scroll down, and if this button is activated, which is Request Unlock Key, that means the bootloader on your Motorola device can be unlocked. Go ahead and click on I Agree and then Request Unlock Key, and then you will receive the unlock key via email. Motorola is now giving us a warning that unlocking the bootloader will void our warranty; are we sure? Yes, OK. And there it is: we will soon receive the unlock code, or the unlock key, in our email.

Alright YouTube, the next command is to go ahead and unlock the bootloader on your Motorola device. The command for that is fastboot oem unlock, followed by the key which you got in the email. Once you've entered the command, go ahead and press enter. Let me go ahead and warn you one more time: your device will be factory reset, and all your pictures, data, and videos, everything will be lost, so make sure you have backed those up. Motorola is also giving us the same warning one more time, so you have to enter the command one more time. I will go ahead and enter the command one more time, and there it is: we got the success message that the bootloader on our Motorola device was unlocked.

Once we've unlocked the bootloader, we can go ahead and reboot our device. The command for that is fastboot reboot. Once you've entered the command, go ahead and press enter, and at this point your Motorola phone will reboot, and you will get a message on your phone's screen that your bootloader is unlocked. At this point you can go ahead and disconnect your phone from the computer.
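[Editor's note: for reference, the command sequence from this walkthrough, wrapped in a minimal Python sketch. It assumes fastboot is installed and on your PATH; UNLOCK_KEY is a placeholder for the key Motorola emails you, and running the unlock will factory-reset the phone.]

```python
import subprocess

def fastboot(*args):
    # Shell out to the fastboot binary (assumed to be on your PATH)
    # and show whatever it prints.
    result = subprocess.run(["fastboot", *args], capture_output=True, text=True)
    print(result.stdout or result.stderr)
    return result

fastboot("devices")                    # 1. confirm the phone shows up in fastboot mode
fastboot("oem", "get_unlock_data")     # 2. data to paste into Motorola's website

UNLOCK_KEY = "PASTE_KEY_FROM_EMAIL"    # placeholder for the key Motorola emails you
fastboot("oem", "unlock", UNLOCK_KEY)  # 3. first run shows the warning...
fastboot("oem", "unlock", UNLOCK_KEY)  #    ...second run actually unlocks (wipes data!)

fastboot("reboot")                     # 4. reboot the now-unlocked phone
```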
So there it is, YouTube: we've successfully unlocked the bootloader on your Motorola device. I hope my video helped you. Likes, shares, and subscribes are appreciated; feedback and comments more than welcome. See you when I see you.