Madonna On New Album ‘Madame X,’ Working With Maluma & Swae Lee | Backstage Interview | BBMAs 2019

– Nice eye patch. – [Interviewer] I don’t
know if anyone can see it. But I just
– Are you– – [Interviewer] I swear,
she gave me the eye patch. – Well, we’re Madame X-ing together. – [Interviewer] You smell lovely. – Thank you. – [Interviewer] Is that
some sort of MDNA scent? – It’s called Portrait of a Lady. – [Interviewer] Really? – I know. A girl can dream. – [Interviewer] Let’s talk about tonight. – Mmmhmm. – [Interviewer] You just
finished performing Medellín? – Yes. – [Interviewer] With Maluma – Yes, Maluma baby. – [Interviewer] Crazy,
fantastic, hi-tech performance. – Thank you. Yeah. – [Interviewer] When
did you start planning and prepping for this? – Um. Many, many, many months ago. – [Interviewer] Must’ve been
like a year ago. (chuckles) – It wasn’t a year ago, but it was a long time ago. ‘Cause the technology had
to be explained to me. And I couldn’t figure it out. I kept saying, wait a minute, let’s get this straight. And there were options. – [Interviewer] Mmmhmm. – So, you know, there’s augmented reality. There’s virtual reality. There’s a combination of the two. But they take a lot of
time and preparation, and filming and yeah, so. – [Interviewer] So much. How long were you in
rehearsals specifically for this, to get all the dance moves down and – Well, rehearsals for
the personas on the green screen, – [Interviewer] Yeah. – a couple of weeks. – [Interviewer] Mmmhmm. – And then rehearsals for
the actual performance by myself, without Maluma, for a couple of weeks. And then with Maluma, about a week, yeah. – [Interviewer] Okay. And Medellín is
the first sort of taste of new music from the Madame X album? – That is correct. – [Interviewer] I feel like there are more tastes coming before the main course, am I correct in thinking that? – There are. We’re gonna give you a
little smorgasbord of – [Interviewer] Series
of appetizers (chuckles) – Delights, some appetizers
from around the world, yes. – [Interviewer] Amazing – Mmmhmm. – [Interviewer] There’s a Swae Lee track. – Yes. – [Interviewer] Crave. – Crave. Yeah. – [Interviewer] Why did you
wanna work with Swae Lee? – Because I think he’s really talented. – [Interviewer] Of course. – He is. I think he’s a very good writer. I think he’s a great singer. And he’s so cute. Kid’s important. – [Interviewer] Kid’s important. – Yeah, he has good energy. – [Interviewer] What was
special about the track? How did the track come together, was it a collaboration, the two of you writing? – No, I wrote the song with Starrah. – [Interviewer] Okay. – And it was something I wrote when I first started putting the record together. And then I kind of put
it on the back burner when I started working
with all the musicians I was working with in Lisbon. – [Interviewer] Mmmhmm. – And then I went back, like, wait, let me revisit all
the first songs I wrote. And I listened to Crave. And I thought, I need to sing with a man on this song. ‘Cause it’s a song about desire and longing. Longing. Desire and longing. I just like the tone of his voice. So I asked him to be on it with me. – [Interviewer] So it started
out sort of as a solo song and then you thought
– yeah – [Interviewer] you need a male
voice on here as a counterpart – Yeah, yeah. – [Interviewer] Was it hard for you to kind of narrow down the number of songs that
you had going into the album, or did you just kind of go in there with a concept of, like
– Oh
– [Interviewer] Here are the specific tracks I want. – No, no. I had no idea how many songs I was gonna end up with. The record was born in Lisbon. And originally I worked with a
lot of Portuguese musicians. And sang several songs in Portuguese. And then it kind of expanded to different countries. To Brazil. To Colombia. And then, interspersed, you know, working with Mirwais, the French producer. So it’s really a global album. – [Interviewer] Yeah. – So. – [Interviewer] Do you sort of consider it like a concept album in a way? ‘Cause like with the Madame X personas. – I don’t really know what that means. – [Interviewer] Well,
in a sense of you know there’s all these different
personas, you know, the equestrian and the teacher. – But they’re all extensions of me anyway, because I do all those things. – [Interviewer] Right. – Believe it or not, I do clean my house. So I consider myself a housekeeper. – [Interviewer] Really? – And I ride horses and I love it. And I can teach you to do
the cha-cha if you want. – [Interviewer] I might fall over. – So, it’s saying that
I am all these things. But also, the other reason that I
named the record Madame X is because it was a name given to me when I was 19, when I
first moved to New York, by a woman who I looked up to and admired. And she gave me that name because she said she couldn’t recognize all my different personas ’cause I kept changing the way I looked. And that was at the beginning of my career, when I didn’t think about who I should be or what I should be. I was experimenting. – [Interviewer] Mmmhmm. – And so, I felt like I had come full circle and gave the record that name. ‘Cause I’m in the same frame of mind. Does that make sense? – [Interviewer] Yeah. Who was the woman that gave you the name? – Martha Graham. – [Interviewer] Good lord. – The great Martha Graham. – [Interviewer] You went to – Do you know, I danced in her school – [Interviewer] Yeah. Yeah. – Yeah. She was alive
when I studied there. And I danced in the dance company of Pearl Lang, who was one of her
protégés. – [Interviewer] You
eventually met Martha Graham in person at some point later.
– Yes, I did. When we met, she was mad at me; then she gave me the name Madame X. And then years later, when I was successful, she asked me to present her an award. And we traded stories about those days when I was a bad girl. – [Interviewer] You’re working
with Mirwais on the album. – Yeah – [Interviewer] He produced Medellín. – Many of the tracks, yes. – [Interviewer] He produced
other tracks as well. – Yeah. – [Interviewer] What
was it like going back and revisiting, working with him again after kind of a little break? – It was nice. It was refreshing ’cause he’s a real deep-thinking, philosophical, existentialist intellectual. And he likes to argue. And debate. And pontificate. – [Interviewer] Do you
like having him in the studio when you’re writing? – I do, actually. – [Interviewer] Okay. – Because it provokes thought, which is rare these days. And then it gives you ideas for songs, and then you dig deep and you start to learn about characters. Like, I have a song on
the album called Dark Ballet. – [Interviewer] Mmmhmm. – Which, believe it or not, is inspired by Joan of Arc. So, when we were writing the song, we started doing all this
research on Joan of Arc. And I mean, I learned things about her that I didn’t know, beyond the broad strokes
that everybody knows. – [Interviewer] Right. – So things like that come up. You know, then we start citing filmmakers or philosophers or, you know, quantum physics, or you name it, so. – [Interviewer] This totally
translates to pop music. Right?
– It does, eventually. – [Interviewer] It does? Eventually. – Yeah, so somewhere in that pop music is deep metaphysical thinking. – [Interviewer] I’m assuming
there’s gonna be a tour maybe at some point? You’re probably not allowed
to say anything right now. – It’s been spoken about – [Interviewer] Okay. – In such a deep way – [Interviewer] Okay. – That I’m actually having
production meetings. Yes. – [Interviewer] So it’s
probably happening. – Yes. – [Interviewer] May I make a request? – For a song you want me to perform? – [Interviewer] Just to consider – Just a teeny one, a tiny one. – [Interviewer] A tiny one. – What are you
missing from the repertoire? – [Interviewer] Okay. So you’ve had 38 top ten hits on the
Billboard Hot 100 chart. That is a record. – I’d like to have more. – [Interviewer] We can work on that. – You don’t like me anymore. – [Interviewer] Lady, come on. – Man! – [Interviewer] You’re my queen. – Well, prove it. – [Interviewer] I can’t
just make you do something on the charts. – You can do whatever you want. – [Interviewer] Okay. Let’s get back – You’re wearing a patch. – [Interviewer] True. The
patch can do everything. – Yes. – [Interviewer] Back
to my question though. – Yes. – [Interviewer] 38 top ten
hits, but you haven’t performed 3 of them live yet. – What? – [Interviewer] I’ll Remember, This Used to Be My Playground, and Rescue Me. Just consider! – Those were top 10 hits? – [Interviewer] Top 10 hits. I’ll Remember was No. 2, Rescue Me was No. 9, and uh, what was the other one? This Used to Be My Playground was No. 1! – Because they’re very laid-back, dreamy songs and they don’t usually go in my, like, Rahh Rahh Rahh, you know. – [Interviewer] True. Interlude, I don’t know. Rescue Me is kinda punchy. – Rescue Me is kinda dope. I’ll actually, I’ll consider that one. – [Interviewer] Thank you, Madonna. – Okay. Your love has given me hope. Okay, to quote the song. – [Interviewer] The song? To quote yourself? – Yes. Yes. – [Interviewer] Well Madonna, this has been lovely. – Mmmhmm. – [Interviewer] And I wish you nothing
but the best with the new album. And look forward to seeing you on whatever that tour is that may or may not be announced soon. – Whatever dressing room I might be performing in. – [Interviewer] Sure. Interesting. – You’ll come anywhere right? – [Interviewer] Of course I will. – Small cabaret. – [Interviewer] Of course. – A stadium. – [Interviewer] There’s Starbucks down the street, I’m there. – Okay. See you there. – [Interviewer] Thank you Madonna. – Thank you.

Land Down Under – Men At Work – (Piano cover)

Travelling in a fried-out Kombi
On a hippie trail, head full of zombie
I met a strange lady, she made me nervous
She took me in and gave me breakfast
And she said, “Do you come from a land down under?
Where women glow and men plunder?
Can’t you hear, can’t you hear the thunder?
You better run, you better take cover.”
Buying bread from a man in Brussels
He was six foot four and full of muscle
I said, “Do you speak-a my language?”
He just smiled and gave me a Vegemite sandwich
And he said, “I come from a land down under
Where beer does flow and men chunder
Can’t you hear, can’t you hear the thunder?
You better run, you better take cover.”
(Yeahhh!)
Lying in a den in Bombay
With a slack jaw, and nothin’ much to say
I said to the man, “Are you trying to tempt me
Because I come from the land of plenty?”
And he said, “Oh, you come from a land down under? (oh yeah yeah)
Where women glow and men plunder?
Can’t you hear, can’t you hear the thunder?
You better run, you better take cover.”
We are livin’ in a land down under
Where women glow and men plunder (yeahhhhhhhhhh)
Can’t you hear, can’t you hear the thunder?
You better run, you better take cover.

Jonathan Van Ness on Working With Taylor Swift & Favorite Songs on ‘Lover’ | MTV VMAs

– You have moved into
acting all of a sudden? I think I saw you in Taylor
Swift’s new video right? – Oh my gosh! Was that an acting cameo? Who knew! Yeah, who knows. I’ll
be a scripted person. – Well what’s it like working with Taylor on You Need to Calm Down? – Very surreal, kinda similar
to walking the red carpet VMAs you’re just like, how did I get here? Its like it’s just really
surreal but really cool. – Can you talk about one
of your favorite moments being on set, shooting that video? – When I turned around,
this wasn’t on camera but I turned around and Taylor
was just like next to me and I was like, Hi! And you know, now we’re friends,
so it’s just really great! – So you’re friends like
you guys text? You– – She texted me on the way here. – About what? – Well I was like, oh my gosh, I think I’m presenting an
award that you’re nominated in, I hope like I get to hug you and then I was like, but I
want to see you afterwards so that was our chat. – So you guys are like
really, really friends? – Yeah! Yeah. – Okay can you talk about Lover? Did you listen to Lover yet? You have to have! – I have not turned it off. I think I’m obsessed with
Cornelia Street, or Death by a Thousand Cuts, and I also love Miss Americana
and the Heartbreak Prince. Those are probably my top three. – But the whole album basically? – Yeah, yes, and I also
obviously love Calm Down, but I’ve already like listened
to that a million times. You know how it is when you’re
obsessed with a new album, and you’ve got your songs, yes.

Do you work in an English-speaking environment? English at Work is the series for you

Narrator: Hi I’m Neil. Thanks for joining
me on English at Work – a new series of programmes set in an office, full of top tips to help you learn some useful business language which you could use in the workplace. In the next few minutes you can join me on
an introductory tour around one of London’s biggest imitation plastic fruit manufacturers,
called Tip Top Trading. We’re going to hear from some of the employees that work so hard
to keep the business running smoothly. So come on then! Let’s step into the office
and eavesdrop on Tip Top Trading’s possible newest recruit. I say ‘possible’ because she is still being interviewed for the job of Sales Executive. Anna: Firstly, this job is an ideal match
for my skills and experience: I’ve spent several years working in sales and I get on
with people easily. Well, I mean, apart from the ones I don’t like of
course! Secondly, I know Tip Top Trading is one of the fastest-growing companies
in London, and I want to be part of that. Paul: That’s absolutely right. Tip Top Trading
is the fastest-growing company in the plastic fruits sector. Narrator: Well said, Anna! If she gets that
job I’m sure she’ll be an asset to the company. The decision is in the hands of Manager, Paul,
who we heard there. He’s a nice guy really, a little disorganised, but when things go
wrong he’s got to take charge, a bit like this: Paul: Yesterday was not a great day. Two clients
came in with serious complaints. Mrs Kumquat received a delivery
of imitation bananas that were purple – not very convincing, and Mr
Lime ordered grapefruits, but got pineapples. Tom! Tom: Yeah, listen… Paul: Were you responsible for these errors? Tom: Well… yes, but – Paul: Look, mistakes happen. But it seems
that Mrs Kumquat left our offices even angrier than when she came in and she
says she will never use Tip Top Trading again! Tom: I tried my best. Paul: Hmmm. Narrator: Ah yes, Tom! I hadn’t warned you
about Tom, one of the company’s top Sales Executives – he’s good… Tom: (on phone) Tom speaking. Yah! Frankie!
So what’s the latest, are we on? … but his interpersonal skills need working
on. Listen to this! Tom: My computer has crashed. I’ve lost my
phone. And there’s a big, BIG problem with my timetable. I have two meetings
scheduled at the same time with two extremely important clients.
I can’t do them both at once! See what I mean! Now, every office needs a
good office assistant – and Tip Top Trading is no exception. It’s got Denise, who’s there to assist, organise and sometimes make the tea… Denise: Oh sorry excuse me, here’s your tea Paul. Paul: Thanks Denise. Narrator: But goodness! She likes to talk… Denise: Really! ‘Denise do this! Denise do
that!’ I’m telling you Sharon, I’ve almost had enough! I get treated like I’m some kind
of servant! Narrator: That’s Denise! I think we’ll just
leave the office now and let them get on with their work So that’s Tip Top Trading. There are plenty
of other people we’ll meet along the way… so go on, why don’t you join me for English
at Work. See ya!

Shared Responsibility: What This Means for You as a CISO (Cloud Next ’19)

[MUSIC PLAYING] ANDY CHANG: Hi, everybody. I’m Andy Chang, Product Manager
at Google’s Cloud Security team, and I’m
joined here by Dan. DANIEL HYMEL: I’m Dan Hymel. I’m with Capital One and I
work in our cloud governance and compliance space for our
multi-cloud environments. ANDY CHANG: And Dan and I will
be splitting the presentation duties. I’ll do the beginning part, Dan
will talk about Capital One’s experience living this in the
cloud, and then we’ll wrap up. DANIEL HYMEL: Sure. When I start, I’m going to
maybe ask you a question. Heck, I’ll ask
that question now. What’s in your wallet? [LAUGHTER] So I’ll start with,
actually, a cautionary tale. So stay tuned for that. And you’re going to see a
slide that you’re probably not used to at Google
presentations of this type. Thank you for coming. ANDY CHANG: Thank you, Dan. Cool. So you’re in SEC209,
“Shared Responsibility, What This Means for You as a CISO.” And I think, hopefully,
you’re benefiting from some of the comfiest
chairs of any of the talks, much better than the ones
in the large stage area. So as you’ve
probably– since you’ve been through a few
of these sessions, we’re also taking
questions in the Dory. And so go to your app. If you have questions that
come up as we’re talking, please enter those in and we’ll
handle them towards the end. We’ll be covering a few things. Overall, one of the questions
that I get talking to customers is what is the exact split
on the shared responsibility model? How do I understand it? How do I leverage it? How do I make sure
that the cloud provider is doing what they’re
supposed to do and that my own company is
doing what we’re supposed to do? So we’re going to talk a little
bit about the perceptions of what customers care
about in security, the understanding that
shared responsibility division, focusing on
visibility and control. And then, ultimately, focusing
on a quick architecture example. Then I’ll turn it over
to Dan to talk you through the lived
experience at Capital One, how they’ve been successful
on multiple public clouds. And then we’ll wrap up
with questions and answers. So when customers
talk to me, the things I hear that folks
care about are really that the right data is
delivered to the right customer at the right point in time
for the right purpose each and every time. And that’s some of
the core of security. The right thing and
only the right thing happens when it’s
supposed to happen. In addition, given as we all see
the current threat environment and the activity of
threat actors increasing, it’s important not just to
react to [AUDIO OUT] situations, but also to be able to
anticipate and innovate ahead of some of these threats. In addition, many of you are
in regulated environments, so it’s important
[AUDIO OUT] partner to be able to essentially
enable you to deliver on your responsibilities
while running your businesses in the public cloud. And lastly, and really
the focus of this, is having
a clear understanding of the shared responsibility model. I think we’re going to
switch over to [AUDIO OUT].. OK. So now we’re– [AUDIO OUT]. I think this is [AUDIO OUT]. SPEAKER 1: Sorry, folks. Give us one second to get
this straightened out. [SIDE CONVERSATION] ANDY CHANG: OK. All right, so this is better. Cool. All right, so we’re
going to talk about– oh, I think it’s still
cutting in and out. Technical difficulties. Cool. All right, can you
guys hear me OK? All right, we’re going
to go with the hand mic and see how this goes. Thank you. So we’re going to– the
focus of this talk is really on the understanding. Enabling you guys
to understand where that split is from a shared
responsibility standpoint. And really, ultimately,
for you as CISOs, and you as part
of the security function of your organization, enabling
you to actually accelerate your company’s business velocity
by enabling your stakeholders to have and
understand really what is the controllable and
acceptable amount of risk. So really focusing on this area. At Google, the way we think
about the shared responsibility model is really around two key
items, security of the cloud and security in the cloud. And we’ll talk about how
we divvy those things up in a moment. From a core principle
standpoint, security at Google is done with defense in depth. We have at least two layers
of protections that are independent from each other
between anything of interest– anything you want to protect
and any kind of bad activity or threat actor– at scale. So, as you can imagine, Google– I think other folks
have said we’re 25% plus of the internet
from a traffic standpoint. Things done at Google for
security or for other things have to be done fully
at scale and work at scale, and by default. How
do we enable our developers, how do we enable the system
so that the controls that are necessary and required
are enabled by default so that folks can run? At the core underlying
this is really the use of strong cryptographic
credentials and identity. And what that means
is that whether it’s a machine, whether it’s
a human, whether it’s the data, whether it’s
the code, whether it’s the underlying
service, they all have unique cryptographic
identities that we can compare against what
should be happening. And provenance, the ability to
establish, through a hardware root of trust, that the
underlying hardware, the low-level software, the OS
software, the applications that are running are all fully
attested to at each stage and that we’re running the
right code at the right time. The other part of this
is that, at Google, we think trust isn’t gained just
through technology, but also through transparency. And really, what that means to
you and to customers in general is thinking about reducing what
we call the unverifiable trust surface. What that means is minimizing
the amount you actually have to trust us as
a cloud provider. Having you being provided
enough information so you can make a
decision of your own to verify the claims
that we’re making. And that’s a core part of
what we’re trying to do. Ultimately, we’re interested
in providing customers the capabilities to help you
build secure applications and fulfill your part of the
shared responsibility model. When we think about the
controls that are in place, we really think about
them in three areas. Underlying control,
things that really help you protect the data;
visibility, the ability to classify the data and monitor
the actions on that data; and then detection
response, the ability to actually validate
the controls running and detect essentially things
that are non-compliant, activities that could be
threats, or from bad actors. And for each of these
areas, we have technology that we provide to enable that. When we think about the
shared responsibility model, it’s important to understand
that the boundaries of that shared
responsibility vary based on which types of
services you’re using. And for many of our
customers, they’re using a spectrum
of services, which is why it’s not
surprising that, as a CISO or as part of the security team,
the details of which service and which part can add
additional complexity. And so it’s important
to understand where those divisions are. When you’re running on premise– everything in the blue
represents the customer. So when you’re
running on premise, you own all of your
hardware stack, your relationships, the
underlying software running. So not surprisingly,
the full responsibility is really on you
as the customer. When you’re running as an
infrastructure as a service, which is essentially running
your own virtual machines on our platform, but
installing your own images, setting up your own
network architectures, the responsibility for us is
to provide you the ability to log and audit the things
that are running on the system– the network, the underlying
hardware and infrastructure, the pieces that you’re
building on top of. So the services we
provide to you are secure, and then you’re responsible for
putting those things together in secure architectures. When you’re running a
platform as a service– like BigQuery or App Engine,
one of those things– now, really, you’re providing
the code that you’re running. And we’re responsible for
delivering the underlying services that execute that
code in a safe and secure way. And when you’re using
software that we’ve provided, now we’re responsible,
as well, for the code. So, if you’re using
Gmail or Drive or Docs, that code has then become
part of our responsibility. And then the thing
that you focus on is access to those
applications, access to the data that you provide,
and then the data that you’re loading
into those applications. That’s a quick summary
of the different levels, and we’ll go into some
of this in more detail. So, as we talked about
from a Google perspective, we think about security of
the cloud and in the cloud. Of the cloud is about
the core infrastructure that you’re then
using, and the core services to build your
products and applications for your consumers. For Google, we’re responsible
for security of the cloud, and we think about it as, again,
defense in depth, at scale, by default. We model the
underlying architecture that you’re running on
top of in nine layers. Everything from the
hardware all the way through the underlying low-level
software, the applications, and ultimately to usage. And everything that
we do at Google is rooted in a hardware root
of trust, which is the Titan chip that you’ve heard of
through the last couple of Next conferences. It’s a purpose-built
chip that we’ve built and it sits on our compute
cards, our processor cards. And establishes, on boot,
a cryptographic identity for that piece of
hardware and verifies that all the underlying
pieces of software that are meant to boot are
doing so in the right way, that the hardware
components are correct and configured the right way. And, if any of
those things fail, that card or that
compute instance does not actually boot and
be entered into the fleet. And that allows us to
make sure, once something goes through that, that
it’s in a known good state and it can be put into
the rest of the fleet and start serving customers. That becomes the
underlying piece that drives our servers,
our storage, our network, and then ultimately
our data centers. And what that allows
us to do, then, is reduce the threat
for folks injecting either malicious hardware
or malicious software into our systems. In combination with that, as we
talked about in the beginning, it’s important for
us at Google that we have the ability for
the code that we write to define that the
right identity accesses the right machine, is authorized
by the right code accessing the right data, and it’s in
the right time and context. And that’s done through
cryptographically secure identities. So not only users
have identities. Machines and services
have identities, devices have
identities, and then code and data have identities. And checking all of those things
when we run a Google service is a core part of our
underlying platform. Here’s a stack with
a little more detail of the different
pieces within Google that come into play
to secure each layer. You’ll see that there’s at least
two items at each layer that are put together to provide
redundancy and really defense in depth. Whether it’s the purpose-built
underlying infrastructure or the fact that we,
from a boot standpoint, cryptographically sign all the
pieces of our boot software, from an operating system
and hypervisor standpoint, we use our own version of
KVM where we stripped out a lot of parts of
the virtual machine monitor, reduced its attack
surface and its risk surface. We’ve also then added
sandboxing in so that further creates a level of isolation. From an OS standpoint, we
use our own curated operating system for our host side. We also make that available
as the container-optimized OS for you, as a customer, to use. If you’re using our
container-optimized OS, you can also enable
automatic updates, which will patch the
container-optimized OS so you can leverage and benefit from
the same type of security controls we run
on our host side. From a storage and
network standpoint, unique among cloud providers,
all services, all data, is encrypted at rest and
in transit at Google. The logging of both internal
Googler access, as well as providing audit logs for
your users and your systems and what they do
in your systems. An identity access
management system that allows us to do
fine-grain permissions, and then really
managing those keys that are core part of
encryption at scale. From a networking side, we
have one of the world’s largest private networks. What that means is that from
when your data or your service is touched by a customer
from outside our network, it basically travels wholly
on Google-owned private networking, private fiber,
to the server in our systems. And that gives you not only
performance advantages, but gives you a single choke
point from a network security standpoint. There are no additional
actors in the underlying hops. From an application standpoint,
both from the whole software lifecycle, from static analysis
to the patching and checking of the packages that are loaded
in, we have control over that. We also cryptographically
sign each piece of software as it goes through
the development stage, and then check before it’s
deployed that the manifest has all the right policies applied. That’s something
we’ve externalized to customers as a part
of binary authorization. So, if you want to adopt
that same model that you can run vulnerability
checks, various types of patching checks, static
code analysis, and then as long as code passes that
created cryptographic signature and have that checked
before the code is deployed, you’re able to do that through
our binary authorization product. Once the applications
are up and running, they’re protected by
our own global front end and the WAF capabilities and
DDoS capabilities around that. And then we provide, as well,
our own security scanning for L7 web application
vulnerabilities that we use inside of Google as
a product called Cloud Security Scanner for you to
use as a customer. When things are
deployed at scale, we’re very fond of saying that
your first users are typically abusers. So whenever we deploy a service,
we see a lot of attack traffic. Therefore, the built-in
DDoS for our own services, the ability to be our own CA. So it’s very hard for folks
to forge anything related to connecting to Google. And that all services provided
at Google are at full TLS. Finally, from an
operations standpoint, not just trusting us,
but having third parties do compliance checking. The ability to do
live migration, which enables us to do patching
of live running VMs without taking your
services down, allows us to, on a continuous
basis, do that level of patching and keeping
things up to the highest level of security. Then we have full SOC threat
analysis and the ability, from a user standpoint,
to connect to our services through a BeyondCorp
zero-trust type network, as well as for them to do
hardware-based second factor for security key. And that is the structure for
every service created at Google and used at Google. And the tools in
which you would then build on top of when you
use one of our services. When we think about
enabling customers, we do this providing
the same type of services either in the
dark green as products you can consume, in the light
green as either products that you can use from Google or
that you can get from partners, or in the case of the dark
blue, core things we still do by default for you when
you’re running on Google Cloud Platform. We’re going to
talk and highlight some of the key differentiators
on Google Cloud Platform that will help you do your
job better and fulfill your part of the
shared responsibility from a visibility
and control side. So one of the core
things which we announced to general availability
this morning is the Cloud Security
Command Center. That’s the central
pane of glass into which you can bring detections
from Google native products, third-party products, or
ones you’ve written yourself for vulnerabilities and
threats all in one place, and see those in
context with your assets and the business context
of those services and the sensitivity of the data. It gives you one place to look for
visibility, one place for you to then query and understand your
data, and then the ability to trigger prevention,
detection, and action. In addition, what we feel is
a key part of understanding the attack surface
is understanding where your data is and what
sensitivity level [INAUDIBLE]. What we provide is the same
service we used within Google for data classification,
our Cloud DLP API, which is designed to work on data at
exabyte and petabyte scale. It allows you,
number one, for data sitting in Google Cloud Storage
or in BigQuery tables, to
classify, at scale, sensitive data either through
our built-in classifiers, the ability to use
regular expressions, or to express this
in terms of data sets that we can
train our models on. Once you’ve
identified that data, you then have the choice of
multiple de-identification options: simple things like
substitution, more intricate things like
format-preserving encryption, or transformations that
allow the data to still be usable in analytics but
still preserve privacy. In addition, unique to Google is
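As a rough illustration of the classify-then-de-identify flow described here, the sketch below uses plain regular expressions as stand-ins for Cloud DLP's infoType detectors; the detector names and placeholder format are invented, not the real Cloud DLP API.

```python
import re

# Hypothetical stand-ins for DLP "infoType" detectors: each is a name
# plus a regular expression, mirroring the built-in and custom-regex
# classifiers described in the talk.
DETECTORS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL_ADDRESS": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the set of infoTypes found in the text."""
    return {name for name, rx in DETECTORS.items() if rx.search(text)}

def deidentify(text):
    """Substitution-style de-identification: replace each finding with
    a placeholder token so the text stays usable downstream."""
    for name, rx in DETECTORS.items():
        text = rx.sub(f"[{name}]", text)
    return text

record = "Contact jane@example.com, SSN 123-45-6789."
print(sorted(classify(record)))  # ['EMAIL_ADDRESS', 'US_SSN']
print(deidentify(record))        # Contact [EMAIL_ADDRESS], SSN [US_SSN].
```

A real deployment would also support the format-preserving and model-trained options mentioned above; this only shows the simplest substitution case.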
we provide Access Transparency, which means that we
provide you, in our audit logs, a notice every time a
Googler accesses your data and for what reason: what region that
Googler was from and, for
customer-initiated support events, the case number that triggered
that access. This is across a wide
range of Google services, and unlike some of the other
providers, not just restricted to a small set of the
provided services. Also, at Next, we’re
very excited to announce Access Approval, which now
allows you to gate Googler access to a subset
of your data in real time. So not only will we notify
you if a Googler is trying to access data, you
actually get the ability to approve or deny that access,
and if you deny it, the Googler
will not gain access to that data. The results of these are
surfaced in the Cloud Security Command Center, and also through the
audit logging APIs. One of the key parts
we talked about earlier was trust through transparency. And so part of that is
there’s a lot of things that we’ve told you
about our technology and how we do things. But you don’t have
to just trust us. We go through, twice a year,
a set of certifications. You'll see some of these here. And those certifications
are renewed on an ongoing basis. Working with your sales teams
and with your account teams, you can get access to some of
the reports of these things. So you can see for
yourself, whether it’s overall global standards or
country-specific standards, how we do against
the requirements and certifications you need
to run your businesses. When we move to control, one
of the key things that’s unique to Google is an emphasis on
providing out-of-the-box, top-down, logically central, but
globally distributed controls. We were the first of
the cloud providers to provide an organization-level
viewpoint: a top-down resource hierarchy that also
is an IAM boundary and also is a network boundary
so that you have the ability to start your systems and
your developers in a safe mode where they have, by
design, restricted access that you then can
explicitly only grant through the IAM roles. From an IAM standpoint, we have
over 300 curated roles, so out of the box,
separation of duties is built in, with those
products requiring explicit grants for folks to use them. We have hierarchy
and inheritance in our resource hierarchy. So if you’re given
something at a folder level, that person then can have
the underlying roles in each of the projects below that. But if you’re at
a project level, you can’t go up the chain
unless you’re explicitly granted that type of access. In addition, we
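The top-down inheritance just described can be sketched as a small walk up the resource hierarchy; the org, folder, project, and member names below are invented for illustration, not real GCP identifiers.

```python
# Toy model of the resource hierarchy: grants made on a folder are
# inherited by the projects under it, but a grant on a project never
# flows upward to the folder or org.
HIERARCHY = {
    "org": None,                    # maps child -> parent
    "folder-payments": "org",
    "project-api": "folder-payments",
    "project-batch": "folder-payments",
}

GRANTS = {
    ("alice", "folder-payments"),   # granted at the folder level
    ("bob", "project-api"),         # granted at one project only
}

def has_access(member, resource):
    """Walk up from the resource to the org root, honoring inheritance."""
    node = resource
    while node is not None:
        if (member, node) in GRANTS:
            return True
        node = HIERARCHY[node]
    return False

assert has_access("alice", "project-batch")      # inherited downward
assert not has_access("bob", "folder-payments")  # never flows upward
```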
have org policies that you can define
at the top level that can be enforced across
all your organization, regardless of the
underlying pieces. And then we’re going to talk
about two other things that affect your
communication pattern. VPC Service Controls,
which provide you the ability to define
service perimeters. So unlike the traditional
L3, L4 networks, which are based
on routes and IPs, and restrictions based on IPs,
as you move to microservices, services have
unique identities, and you have the ability
to block services' access to particular types of
sensitive data or projects. And those can be applied
to Google services as well, so that you can block,
for example, Google Cloud Storage writes or reads from
a particular set of resources unless those
services either belong to a particular access level
or have a particular set of permission groups. In addition, we also
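The identity-and-context gating described here can be sketched as a simple perimeter check; the perimeter layout, access-level fields, and request shape below are illustrative inventions, not the real VPC Service Controls API.

```python
# Sketch of a service-perimeter check in the spirit of VPC Service
# Controls: access depends on service identity and request context
# (region, device), not just source IP.
PERIMETER = {
    "protected_projects": {"project-sensitive"},
    "allowed_services": {"service-analytics"},
    "access_level": {"regions": {"US"}, "require_trusted_device": True},
}

def request_allowed(req):
    if req["target_project"] not in PERIMETER["protected_projects"]:
        return True  # resource is outside the perimeter
    level = PERIMETER["access_level"]
    return (
        req["service_identity"] in PERIMETER["allowed_services"]
        and req["region"] in level["regions"]
        and (req["trusted_device"] or not level["require_trusted_device"])
    )

ok = {"target_project": "project-sensitive",
      "service_identity": "service-analytics",
      "region": "US", "trusted_device": True}
assert request_allowed(ok)
assert not request_allowed({**ok, "region": "EU"})
```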
provide a similar set of capabilities around your
L3, L4 level networking. So we have the ability
for you to create what’s called a shared VPC
and, in that shared VPC, define all the
firewall rules that apply to all the tenant VPCs. So you have, again,
one logical choke point where you can provide
the administration of your overall network,
separate and independent from the underlying
administrators of the underlying projects. Both of these
concepts are designed to provide you
logical bottlenecks and choke points for
control, but not cost you anything on the performance
side because they’re implemented in globally distributed ways. And those are unique to Google. VPC Service Controls,
as I talked about, is really defining
service-level perimeters, which allow you to put that
around specific sets of sensitive data, allow
you to define access levels. And those access
levels allow you to put policy
requirements, for example, of what kind of geographies that
the data can be accessed from, what kind of restricted
IPs, what potential device characteristics are needed
to access that data. Those access context
levels apply not only to infrastructure
controls like this, but also back into your G
Suite access to Docs and Gmail. So in one construct,
access levels, you can define a set
of controls that apply across both G Suite and GCP. Similarly, from a key
management standpoint, as I talked about,
by default, your data is encrypted at
rest and in transit. But we know for some
of our customers, they have either additional
regulatory requirements or views of their threat
model or risk that require greater
control of their keys, so we provide a full spectrum. From the left,
default encryption, you don't have to do anything. Your data is encrypted at rest. Cloud Key Management
Service (KMS), which allows you full control
of creation of keys, destruction of keys,
rotation, times, and periods. Full logging. IAM roles on those keys
so you can demonstrate to your regulators
and auditors that you have full control of the keys. Cloud HSM, which is a
hardware-backed solution so that you can have a root
of trust for those KMS keys be in hardware. And it's a FIPS 140-2 Level 3,
globally distributed Cloud HSM system. You can also, if you need
to have the root keys be sourced within
your own org, you can use
customer-supplied encryption keys (CSEK), which allow you to push
the encryption keys to us when you need something decrypted. We do not store that key. It stays only in memory
for the live operation and, therefore, you
have not only the root of trust in your
own hardware system, we have no access to
those underlying keys. You can also
stack your own HSMs in our colos if
you want that further level of additional control on top of
customer-supplied encryption keys. So depending on the risk
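The customer-supplied key flow just described can be sketched as follows: the caller pushes the key with each request, the provider uses it only in memory for the live operation, and never persists it. The XOR keystream below is a deliberately toy stand-in for real AES, purely to illustrate the flow.

```python
import hashlib

def _keystream(key, n):
    """Derive n pseudo-random bytes from the key (toy illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def decrypt_with_supplied_key(ciphertext, key):
    # The key lives only in this stack frame for the duration of the
    # call; nothing is written to storage, mirroring the CSEK promise.
    return bytes(c ^ k for c, k in zip(ciphertext, _keystream(key, len(ciphertext))))

encrypt_with_supplied_key = decrypt_with_supplied_key  # XOR is symmetric

ct = encrypt_with_supplied_key(b"ledger row", b"customer-key")
assert decrypt_with_supplied_key(ct, b"customer-key") == b"ledger row"
```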
parameters of your business, you have a full
spectrum of capabilities available to support
data encryption. And before I turn
it over to Dan, I’m going to talk through now
how the various controls get put together when you think
about a secure architecture. So first, most customers
don't start with a clean slate. They have an existing on-premises
or other cloud installation. They have identities set up,
typically not a Google identity out of the box, from an
enterprise standpoint, local compute,
gateways, and then data. So the first thing to do
is to essentially connect to Cloud Identity, which lets us
federate your existing identity service. And now brings those
identities to be used in Google Cloud Platform. Then, through the groups and
the underlying individuals, assign out the IAM
roles and policies which you want so that folks
have the correct separation of duties. The ability to establish
upfront org-level policies that apply, regardless
of the underlying pieces, across your whole organization. For example, that VMs cannot
have publicly-facing internet addresses. Then the ability
to define, again, from a L3, L4 level, and
overall control of your firewall rules and networking
through shared VPCs. Breaking out from an HA
standpoint and a disaster recovery standpoint, the
various regional pieces that you want to allocate within
each of the regions, the zones, the resource allocation, and
then the underlying subnet connections. By default, and
unique to Google, our networks are global. So you’re able to make
those connections very straightforward. You don't have to peer
between the different regions. Our VPCs are global by
nature, and you’re able to, therefore, set up HA and DR
in a very straightforward way. Next thing you want
to do ultimately is you have data
sitting on-prem, and you want to bring
it securely into GCP or take the data that we compute
and bring it back to your on prem. So combining Cloud
Router, firewall rules, dedicated secure interconnect,
either through our own peering or through partner
peering, allows you to bring the data in and
out of your area securely. Also, we have VPNs to allow
you to do that as well. Next piece, then, is from
an application standpoint. Protect your application
once you serve it from the typical
attacks like DDoS. So you can either
use our default global load balancer,
which as long as you put that in front of one
of your GCE services, will take advantage of
our global front end. You can use one of our
managed services, which has DDoS built in, or you
can use the Cloud Armor product, which gives you
additional capability and the ability to write
your own rules to manage your DDoS and WAF protections at a fine grain. Next piece, then, is you
want to turn on logging so that you have the monitoring
and evidence of things that might happen to
allow us to do detections. And audit logs are
turned on by default. You can then also look at
VPC Flow Logs, firewall logs, and then general logging. Then you want to turn on
the monitoring, learning, and detection pieces, which
include the Cloud Security Command Center, security
health analytics, the ability to look at Stackdriver
monitoring, running Security Scanner
on your, essentially, web applications. And then if you want to
dump the data into BigQuery for additional processing. You then create a
service project, which allows you
to kind of create that boundary of control for
the services you want to run. And then create a secure
perimeter using VPC Service Controls around that, so that you
can restrict which services can access that underlying data. And then, ultimately, bring
that data back into GCP through Private Google Access,
Cloud DNS, and Cloud NAT. So you can either place
your subnets into GCP or push our subnet pieces out. So, in summary,
those are the steps that you take to pull things
in and are the building blocks of you building
a secure service on GCP, and some of the
pieces we give you to fulfill your part of the
shared responsibility model. So with that, I’m
going to turn it over to Dan to talk to you about
Capital One’s journey. DANIEL HYMEL: Great. Thank you. Are we using this mic? ANDY CHANG: No, we can
use your mic for this. DANIEL HYMEL: OK, great. Andy, thank you for
opening the stage and allowing me to
present and open the windows on what’s actually
going on inside of Capital One. So I promised you
a cautionary tale. And this little
creature’s name is Altria. Altria is a rescue dog that my
12-year-old daughter picked up from the Richmond Animal
League in Richmond, Virginia. What does this dog
have to do with the topic of shared
responsibilities, compliance, and controls? On the surface, probably
absolutely nothing. But take a closer look. Take a look at what’s
around her neck. So here’s the tale. It happened shortly
after Google ’18. About a week after Google
’18, this little dog, Altria– love her to death. She has no malicious
intent whatsoever. But she’s a runner. She was trained
as a hunting dog. She’s an American hound. So on one particular night, this
was probably after the fifth escape from the homestead–
we live on a parcel out
in the country– we kept talking about, let's put in
a security fence, an invisible fence. And we kept holding off
because the price was too high. So on one particular event,
she left around 8:00 PM. She has free run of
the neighborhood. We live in this river
canyon on the James River, where about a mile on either
side, there are no fences. And you can run from
the Chesapeake Bay all the way to the mountains. On this particular day, it
was about 11:00 PM at night. We were exhausted. We couldn’t find her. We came home. And as I was planning
to go around the house to check things out, I felt
this immense pain in my leg and I jumped and wondered
what the heck happened. So I looked down and, out of
the corner of my eye, there was a copperhead snake. Unfortunately, that snake took
advantage of the opportunity. And as you know, later that day,
I was in the paramedics' box. I was at the hospital. I was being treated for
a copperhead snake bite. So as you know,
in everyday life, there are risk
scenarios everywhere. This is a very good example. Had we actually put
in the security fence, we would've kept
Altria from escaping from our private domain
out into the public. So as a result, we
implemented this control. It's a shock collar. So as you approach
the fence, she gets a really
enlightening experience. If she’s able to go beyond that
barrier into the public domain, there are some other controls
that we’ve implemented. So what you might not
know about the rescue dogs is that when
you receive them, you receive them with a chip. So if they are
captured in public, there is a way to
identify them. We also added a collar with a tag. The tag used to have only
our home phone number, but she runs so much
that we actually had to put a cell phone number
because we would get phone calls at home when we’re
actually out in the field looking for her. So, as you see now, it has a
lot to do with cloud controls– I’m sorry– with controls,
with shared responsibilities. It’s the whole community. So let me take you into
the Capital One journey. I’m going to open the
window for you for a moment. You won’t see this often. And I’m going to tell
you and share with you a few nuggets on how
we made our journey into the cloud successful. So Capital One, today, has
a significant need for cloud because it enables us to
operate with the speed and agility required to
succeed in the digital age. Our agile sprint teams
work in two-week cycles, and according to
them, infrastructure must not be an impediment. As a result of moving to the
cloud, there are some stats, and these are real numbers. We’ve moved from a
development environment build time of three
months down to 30 minutes. New product features
which used to take a month now take two days. With unparalleled visibility
into our environments, we have an overall
improved security posture. I’m going to talk back on
security in just a moment. Our current cloud
status is this. We have over 8,000 production
applications and services. 6,000 additional
services and applications are in non-production. And nearly 100%
of our production, development, and test
servers are in the cloud. Additionally,
moving to the cloud has increased our ability
to secure and manage massive volumes of
high-quality data, operating at a higher speed
with greater resiliency, faster recovery, and at an
extraordinarily lower cost. With Cloud, our
technology organization is free to do what we do best,
build digital breakthrough experiences for our customers. If you are a customer
of Capital One, feel comfortable in knowing
that we have your back and we are doing our very
best using state of the art technologies and best practices
to protect your sensitive data. To make this happen, to make
this security posture happen, we’ve implemented a methodology
around controls and compliance. A lot of organizations might
call this cloud governance or a governance function. So essentially,
that’s what it is. It’s an enterprise
function at Capital One that balances the
need of compliance of cloud controls,
objectives, and the needs of various stakeholders,
from our board of directors to executive management, to
the enterprise communities, like operations, engineering,
info security, and cyber– all of the communities
that come together to help enable our security
capabilities in the cloud. And most importantly,
our divisional customers. These are our
lines of business– our banking community, our
auto finance community, our card community. These are the application
builders and consumers of cloud services. Another way to look at
cloud control and compliance is it’s the totality set
of policies and procedures that help an enterprise
move towards its goals while minimizing
risk and conflicts. So for us, our journey here,
our goal is all in on the cloud, and that’s just
about where we are. So let’s go back to, then,
what is this cloud compliance function? What I’m going to share with
you is a best practice example that is aligned very closely
with the Cloud Security Alliance methodology. So think of this as
a box of cake mix, where you flip that
box on the backside and it gives you
all the ingredients. These are all the ingredients
to make it happen. These are the best practices. You have to include all
of these, in my opinion, in your program for
cloud compliance to have the maximum benefit
to ensure that you’re doing the appropriate
level of due diligence for your environment. I’ll go by these one
by one, and then I’ll give you some more detail
on what they actually mean. So first off, you must
document and maintain a catalog of control objectives. These objectives are partially
derived from NIST 800-53, or FedRAMP Moderate. They are also a
compilation or a derivative of your internal security
policies and procedures. Secondly, for every
single service that our cloud
providers offer, we perform a security
control assessment using a selected set
of security control objectives in that catalog. And, for now, our current
status for our catalog of about 300 control
objectives, we use about 50 of those that are directly
applicable to cloud as it relates to our
shared responsibilities. And, of those 50, 15 of
the control objectives are mandatory
requirements, meaning that, if the service provider– Google– provides
a service to us and it doesn’t meet any of
the 15, that’s a showstopper. So we’ll just tell Google,
we’re not ready yet. We need some more features
before it’s safe for us to consume that service. Next, once you’ve done
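The showstopper gate described above can be sketched as a simple set check: of the roughly 50 cloud-applicable control objectives, 15 are mandatory, and a service missing any of them is held back. The objective IDs are invented for illustration.

```python
# Hypothetical IDs for the 15 must-pass control objectives.
MANDATORY = {f"CO-{i:03d}" for i in range(1, 16)}

def assess(service_name, satisfied_objectives):
    """Return (approved, message) for a cloud service under review."""
    missing = MANDATORY - satisfied_objectives
    if missing:
        return (False, f"{service_name}: not ready, missing {sorted(missing)}")
    return (True, f"{service_name}: approved for consumption")

approved, _ = assess("new-storage-service", MANDATORY | {"CO-021"})
blocked, msg = assess("beta-service", MANDATORY - {"CO-007"})
assert approved and not blocked
assert "CO-007" in msg
```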
the security assessments, you move into the phase of
performing the risk analysis on the gaps that you’ve
identified relative to your control objectives. So depending on which
industry that you’re in, you may have a different
set of objectives than what Capital One has as
a financial service provider. Today, we’re using a composition
of qualitative and quantitative methodologies. We’re looking very closely at
probabilistic risk analysis, which means that there is
a probability in a risk scenario of an event happening
over an annualized time period causing a loss of
money in a particular range. So now that you’ve
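A minimal quantitative sketch of the probabilistic analysis just described: annualized loss expectancy is the annual probability of the event times the expected single-loss magnitude. The numbers are made up for illustration.

```python
def annualized_loss_expectancy(annual_probability, loss_low, loss_high):
    """Expected yearly loss for an event whose loss falls in [low, high],
    taking the midpoint as the expected single-loss magnitude."""
    expected_single_loss = (loss_low + loss_high) / 2
    return annual_probability * expected_single_loss

# e.g. a 5% yearly chance of an event costing $100k-$500k:
ale = annualized_loss_expectancy(0.05, 100_000, 500_000)
assert ale == 15_000.0
```

Real probabilistic risk analysis would use full loss distributions rather than a midpoint, but the shape of the calculation is the same.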
identified your gaps, now comes the tough part. This is a continuous process
on risk management, control implementation, control
compliance, control exception processing. You have to implement
your controls. You have to monitor your
controls for compliance. And you have to allow
application owners to request exceptions for using
services that might not have all the controls,
or for which you're not able to meet all of them. Not a requirement,
but the next one is the divisional
governance advisor. And what we’ve done
is we’ve assigned a technical subject
matter expert like myself to support a particular line
of business that translates everything that we’ve
talked about into terms that they consume. And most importantly,
as well, training. The associates that
perform in this function need to have background
as a subject matter expert in the cloud
provided environment. Back to shared responsibilities
component of this. Andy talked ad nauseam about
this– this is really great– so I won’t go into
any detail here. But I wanted to share with you
that we understand this well, and I’ll just move on
to the next one, which is a payoff slide. This slide basically
says, for Capital One, we look at shared
responsibilities from both a platform perspective
and an application perspective. Let’s take the application
perspective first. I’m sorry, the
platform perspective. So my team working
in cloud governance, we do everything that
we talked about before. We assess the services. We ensure that they meet
our security requirements. We work with cloud
engineering teams and we provision the
services and make those available to our
consumers that leverage those services by
way of applications. So we’re accountable
for then assuring that we implement platform
identity and access management, that we
deploy the operating systems appropriately, meaning
that we provision system images. Rather than using the
cloud provider images, we provision our own so
that we appropriately harden them to meet
our internal standards. We’re accountable for what
we put into the cloud, so we take full
responsibility for that. One particular example of
a control related to this is the notion of rehydration. So, as a virtual machine ages,
it takes on vulnerabilities because of the vulnerability
aging process, and so we have a requirement
to rehydrate every 60 days. If you go beyond that,
bells and whistles go off. If you go past 90,
bigger whistles go off. If you go to 120, the
canons go off and so forth. So I think you get that. So our cloud
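The escalation just described can be sketched as a threshold check: alerts grow louder as a VM's image ages past 60, 90, and 120 days. The thresholds come from the talk; the alert labels are the speaker's jokes, and the function name is invented.

```python
from datetime import date

# Ordered loudest-first so the first threshold exceeded wins.
ESCALATIONS = [(120, "cannons"), (90, "bigger whistles"), (60, "bells and whistles")]

def rehydration_alert(provisioned_on, today):
    """Return the escalation level for a VM image of a given age."""
    age_days = (today - provisioned_on).days
    for threshold, alert in ESCALATIONS:
        if age_days > threshold:
            return alert
    return None  # still within the 60-day rehydration window

assert rehydration_alert(date(2019, 1, 1), date(2019, 2, 1)) is None
assert rehydration_alert(date(2019, 1, 1), date(2019, 3, 15)) == "bells and whistles"
assert rehydration_alert(date(2018, 11, 1), date(2019, 4, 1)) == "cannons"
```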
governance function, from a centralized
perspective, is we make controls and scripts
available that either we can run on behalf of the
development community, or that we can give it to them
and they can actually run it. For example, there
might be a requirement to apply labels or tags
to all of your resources. If the development community
forgets to do that, they can run a
script to make sure that they're compliant with that. Let's move on then
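A compliance script of the kind just described might look like the sketch below: scan resources for a required set of labels and report the stragglers. The resource shapes and label names are invented for illustration.

```python
# Hypothetical label policy: every resource must carry these labels.
REQUIRED_LABELS = {"owner", "cost-center", "data-classification"}

def missing_labels(resources):
    """Map each non-compliant resource name to its missing labels."""
    return {
        r["name"]: sorted(REQUIRED_LABELS - set(r.get("labels", {})))
        for r in resources
        if REQUIRED_LABELS - set(r.get("labels", {}))
    }

fleet = [
    {"name": "vm-1", "labels": {"owner": "team-a", "cost-center": "cc42",
                                "data-classification": "internal"}},
    {"name": "bucket-7", "labels": {"owner": "team-b"}},
]
report = missing_labels(fleet)
assert "vm-1" not in report
assert report["bucket-7"] == ["cost-center", "data-classification"]
```

In practice the same check could either run centrally on behalf of development teams or be handed to them to run themselves, as the talk describes.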
to the application. If you build it, you own it. That’s the mantra
at Capital One. So if we offer a service
to you, you’re golden. You can use that. It’s been whitelisted. The controls are in place, but
you have some accountability. I’ll talk a bit more
about that accountability. So why are cloud
controls important? I wanted to make sure that you
had full visibility to that. For Capital One,
it’s our mission and our fiduciary
responsibility. If you take a standard document
classification or information handling matrix,
you will find that your proprietary or
confidential data is probably your most important asset and
requires a level of protection beyond those that are
company or public. So what we do and
how we’ve bifurcated this is we look at the controls
in the cloud control catalog in three particular areas. There’s technical,
operational, and management. And I’ll show you, on the next
slide, how we codify those. But most importantly, the
controls have to always be on. The windows have to
always be locked. So that means
security protocols, public accessibility, data
encryption, identity access, and PCI. There is no opportunity. There is no tolerance
for control being off. So where do cloud
controls originate? I mentioned to you a
bit earlier that they might originate from
cybersecurity frameworks, NIST, FedRAMP, and so forth. Here's a particular small matrix
of the 14 control categories in the NIST
framework, where I've bifurcated those by technical,
operational, and management. So we take our Cloud
Control catalogs. We look at a service
that Google has provided. We perform that
security assessment. The diagram below, or the
colorful green, red, blue chart, is the actual
detail of the questions that we might ask. And, incidentally, I
have to say that Google has been an extraordinary
partner in this. Google has our scripts. They know what we require. They know when a service
is ready to meet the higher bar for financial
services industry, and so they are embracing that. They’re working very diligently
to say, hey, Capital One, we’ve got this new service. And they know what
our requirements are. So they know when to
actually make the ask. And finally, we complete
the service assessment. We deliver the package. Now, it’s important to
note here, under Item C, that it’s a
cross-functional team. We include governance, we
include engineering resources, cyber resources, architecture,
IAM architecture and, also, our information assurance
third-party management function, where
we fully validate. We have an auditable
trail that our regulators and our internal auditors
can go back and say, did you ask that question? Google tells us a lot of good
things about their services, but we firmly believe
in trust, but verify. So this is the final slide. I’ve got about 15
seconds or so, and then we’ll save time for Q&A.
But here's the payoff, and this is what
resonates with shared responsibilities from an
application owner perspective. Here’s what it means. If you have an application– let’s take a basic, multi-tier
architecture application– where you’re going to
need a load balancer, a web server, application
server, database and file storage, that application
or the type of application may require that you adhere
to an enterprise architecture standard. There may be a defined
architecture already. The architecture might
even say, in order to meet the requirements
for the architecture, there are certain
cloud services that you are mandated to follow. So the application owner
defines the application. They choose the architecture. They select the cloud
services that are approved. This is by no
means the only list of what’s approved
at Capital One, so we could put dot dot dot. After you select the
approved services, then you become more
aware, as an application owner, of what your actual
requirements for controls are. So if you pick a Compute
Engine, if you pick Storage, you will have some requirements
for identity access, data encryption, and so forth. So this is really
the whole story. There’s so much detail here. Feel free to reach
out to me on LinkedIn if you have some
follow-up questions. And we’ll actually go back to
Andy to close it out for us. ANDY CHANG: Thank you, Dan. So in summary, then,
one last slide. [APPLAUSE] So security of the
cloud is something the cloud provider secures. Security in the
cloud is something that the cloud customer
secures for their data and their content. At Google Cloud, we look to
empower you as a customer with the visibility to
control and secure the things that you’re responsible for. And as Dan talked
about, customers can be successful in cloud
using a very thoughtful process, structure, and clean
separations of responsibility within their own org. [MUSIC PLAYING]

Best Practices in Developing G Suite Business Apps (Cloud Next ’19)

In this session, we are going to talk about
developing applications on G Suite, some of the dos
and don'ts of developing applications, and how to set
up organizations. So my name is Satheesh. I am your host for
the session today. Along with me, my co-presenters
are Monica from Genentech, and we are going to have
Sambit from Google Cloud talk in this session as well. So we will start with
giving an overview of what kind of challenges
our enterprise customers face when it comes to developing
applications on G Suite, and how we are driving certain
industry trends with some of our product lines,
followed by talking about our specific products. And then we’ll have Monica
present how they’re organized, and what are some of the best
practices in their organization with respect to app
development on G Suite. Then we’ll talk about, what
is the future looking like, what are the key trends that we
are seeing in the industry when it comes to developing apps
in the productivity space generally, and within G Suite
as well more specifically. And then we’ll wrap
up this session with an overview of some
of the exciting features that we are announcing today
with respect to G Suite development platforms. So before we get
started, a quick note on how to submit questions. So all of you must
have the mobile app. So you will be able
to submit questions through your mobile
app on this Dory. So you can go to this
particular session details, and then submit
[INAUDIBLE] there. Towards the end, we will try
to address your questions. We’ll also be able
to– hopefully we’ll have time to take some
live questions from this room as well. Sounds good? Perfect. So, without further
delay, let’s get started. So imagine yourself to
be a salesperson, an HR professional,
financial analyst– many different roles
in an enterprise. Your key responsibility
is in driving the business with
respect to your role, with respect to your
area of expertise. When you do this,
you’re obviously using many different
applications. You’re using G Suite, plus
you’re using other enterprise applications as part of this. That means you need some
customizations in order to run your business process. You need all those apps. When it comes to
getting those apps, there are some key
challenges that the business users in an organization face. Number one challenge is
what we call the skills gap. As the business
process owner, you are the expert in
your business process. You know how your
process should run. You know how your
business is run. You know your role
better than anybody else in your organization. When you want to
get those apps, you need to go to your IT
developers, and talk to them, and educate them about
the business process. And then they have the
technical expertise to develop those applications. Do you see the problem there? So on one hand,
the business users are really proficient
with their processes. They have the right
skills for that, but they lack the
technical expertise. On the other hand, the
technical developers have the right
technical knowledge, but they don’t know all
the business processes. This is what we refer
to as a skills gap. This requires communication
back and forth in order to get the right
application that you need. This is the number
one challenge. The second challenge
is, the IT developers, now they have to work with
many different business users in the organization,
across the enterprise, understand those
business processes, and then develop the
applications for them. That leads to scaling challenges
for the IT developers. The resources become
limited as a result of that. In any organization, this
is a very common problem. The third challenge
that we see is that technology keeps evolving. Business processes
also keep evolving. And as a result of that,
it’s hard to keep pace with those changes. Everybody’s always
playing catch-up in order to stay in tune
with these changes that are happening. That leads to delays in
getting the apps that you need. That leads to getting
the updates to the apps that you’re using. That’s the number
three challenge. The net result of all
of these challenges is that the involved
stakeholders get frustrated. There are many
different stakeholders. I have identified three
stakeholders here. Number one is the business user
who needs these applications. Number two is the IT developer,
the technically proficient developer. And number three
is somebody who is tracking the cost,
and the schedules, and managing all these
programs and costs, what I call the IT director here. The business users are
frustrated because they are not able to get the
right apps that they need on time or the updates
that they need on time. The IT developers are frustrated
because their project backlog just keeps growing
as they try to work with many different
organizations in the enterprise. The net result for
the IT director is that now they
are facing the cost and the schedule [INAUDIBLE]. Those are the challenges
that everybody faces– all the stakeholders face. How are enterprises
addressing this? What are the shifts that are
happening in the industry to address this? So number one shift
that is happening is that the application
development itself is being moved to
the business users– closer to the
business users– where they have the right expertise
on the business process and they are in
the best position to find out what
is the application that they need, and
better still, actually develop these applications. This is happening via no-code
tools for the business users to develop those applications. The second shift that is
happening in this industry is that there are lot
of SaaS applications that are coming up– Software as a
Service applications. When there is a
need, it’s probably most efficient to go
and buy an application, as long as there is one
that meets the need. That has led to the growth of
the enterprise marketplaces and the growth of
the ecosystems, where ISVs and third-party
developers develop these apps and make them available to
the different businesses. The third shift is
within the enterprise IT. IT’s role itself is evolving. It’s evolving from one of being
an app developer and an app development
organization to being an enabler for development of
applications, by the businesses and by the business users. The business users in
this slide are also called the knowledge workers. So the knowledge workers
are now developing the apps, and the IT organizations are
becoming the enablers. They're providing
the right tools, they are providing
the right data access. They’re providing the right
guardrails– security, et cetera– to make sure
that those applications meet the organization’s needs and
comply with the organization’s policies. In Google and in G Suite,
we are at the forefront of driving these shifts. Number one, with respect
to the first shift, we are providing the
low-code and no-code tools to enable the business users
to develop these applications. Number two, we are
building this ecosystem and we are building
this marketplace where business users in
an organization can go and find
applications that they need and start deploying and
using those applications. With respect to
the third shift, we are also providing the
right tools and technologies that data administrators
need in order to ensure that all these
apps that are being built by the knowledge workers
across an organization remain secure, the enterprise’s
data remains secure, and that the IT administrators
can go on and monitor the application usage and
make sure that they’re able to whitelist the apps
that they allow the enterprise users and the business users
to install and use. That's how we are driving these
shifts in the G Suite Developer platform. Getting into the
specifics, I want to talk about the five products
in G Suite Developer platform, give an overview, so
that you can go out and explore further
details on this. So the number one developer tool
that we provide is Apps Script. Apps Script is a
low-code platform. How many of you here
have already heard about Apps Script? That kind of shows how
popular Apps Script is. You can actually
see that there are over three billion weekly
executions on Apps Script. So it’s a low-code
developer platform, and it enables the business
users to quickly build apps. How so? Because it provides
an integrated document environment. It provides APIs for
all the G Suite apps. It also provides security,
in terms of OAuth, et cetera. And it provides an integrated
runtime environment, so that when you’re
building the app, you don’t have to look
elsewhere to think about where you’re going to run that app. So it has that integrated
runtime serverless environment that you can use to go ahead
and run the application. From a best practices
perspective, if you have an
application that is going to be used by, let’s
say, a few hundred users, Apps Script provides the
perfect platform to get started. Apps Script still
requires some coding and some proficiency in coding. This is for what we call
the advanced knowledge workers, or citizen developers. Let’s say you build an
app and it becomes very popular in your organization. Now it needs to be used by,
let’s say, a few thousand users as opposed to a
few hundred users. That’s the time when,
as a business user, you talk to the IT
organization and IT developers, figure out how to
scale up the application. And also, potentially, you need
new features– maybe some ML or AI capabilities, maybe some
data analytics capabilities. That’s when you can use
Google Cloud Platform, and scale that application,
and build the new features. The second tool that I want
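To make the low-code idea concrete, here is a small sketch of the kind of automation a knowledge worker might script. The sheet is represented as a plain 2D array so the logic runs anywhere; in a real Apps Script project the rows would come from `SpreadsheetApp` and the reminders would go out via `MailApp`, as the comments note. The column layout and the overdue rule are illustrative assumptions, not something stated in the talk.

```javascript
// Sketch of a knowledge-worker automation: find overdue rows in a
// task sheet and build reminder messages. In Apps Script, `rows`
// would come from SpreadsheetApp.getActiveSheet().getDataRange()
// .getValues(), and each message would be sent with MailApp.sendEmail.
// Assumed column layout: [task, owner email, due date, done flag].

function findOverdue(rows, today) {
  return rows
    .filter(([task, owner, due, done]) => !done && new Date(due) < today)
    .map(([task, owner, due]) => ({
      to: owner,
      subject: `Reminder: "${task}" was due ${due}`,
    }));
}

const rows = [
  ['Calibrate pump', 'a@example.com', '2019-05-01', false],
  ['File report',    'b@example.com', '2019-06-01', true],
];
const reminders = findOverdue(rows, new Date('2019-05-10'));
```

Because the filtering logic is a plain function, it stays testable even after the `SpreadsheetApp` and `MailApp` calls are wired in.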
to talk about is App Maker. App Maker is intended to
be a no-code platform, a no-code application
development tool. This is for pure business users,
knowledge workers who cannot code. At this point in
time, App Maker is great for building simple cloud
applications with the data that you are already using
for your business users. If you want something that
is a bit more advanced– if you want more
advanced customization– you can use Apps Script
to customize your app that is built using App Maker. So from a best
practices perspective, if you are a business user, you
will start building the app– as long as it’s a simple
cloud application, you should be able
to build with all the visual drag-and-drop tools,
as is illustrated on the slide here. If you want more
customizations, you would go to the IT
department and try to seek some of
their help in order to customize the application. The third tool that
I’m going to talk about is the G Suite Add-Ons. I literally know of
nobody who is just using one or two applications
in their business, right? They’re always using a
suite of applications. For example, if
you’re a salesperson, you’re using G Suite, Gmail,
Events, Calendar, et cetera. But chances are very
high that you are also using a Salesforce or a Dynamics
CRM along with this G Suite. So Add-Ons provides the right
tools and the framework for you to get an integrated experience
with third-party apps. That’s what Add-Ons
is intended to do. It provides an
integrated experience. It also provides a
development environment in order to build those
Add-Ons, so that you can use multiple applications
in conjunction with G Suite, in an
integrated experience. So from a best
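The integrated experience can be pictured as a small lookup: the open message supplies context (the sender), and the add-on fetches matching data from the other system. The sketch below fakes that flow with an in-memory CRM table; a real add-on would call the CRM's API and render the result as a card in the G Suite side panel. All names, fields, and data here are invented for illustration.

```javascript
// Sketch of the add-on idea: given the sender of the open email,
// pull matching CRM context to show alongside the message. The CRM
// is an in-memory map here; a real add-on would call the CRM's REST
// API and render the result with the add-on card framework.
const crm = new Map([
  ['buyer@example.com', { account: 'Acme Corp', stage: 'Negotiation' }],
]);

function buildPanel(senderEmail) {
  const record = crm.get(senderEmail);
  if (!record) return { title: 'No CRM record', lines: [] };
  return {
    title: record.account,
    lines: [`Deal stage: ${record.stage}`],
  };
}

const panel = buildPanel('buyer@example.com');
```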
practices perspective, you’d look for these add-ons– to start with, on the G
Suite marketplace, where the chances are that you’ll
be able to find the right add-on that you need. Otherwise, this is not a
development tool intended for the knowledge workers. Rather, it is a framework
whose output is intended for use by the knowledge workers. So from a development point
of view, you have to go to IT and ask them to
develop an add-on and make it available
to you, and potentially many of your colleagues
in the organization. The next one is the
G Suite Marketplace. There are over 6,000
ISV applications– both web applications,
productivity tools, add-ons, that are available on
G Suite Marketplace. So if you’re looking to
solve a particular problem, this is probably the
best place to start with. Look to see whether
there is already an application that’s available,
and use that application. And if not, then
you’ll have to look at how to build something in
collaboration with your IT. So if you are in
the IT organization, from a best practice
perspective, you should talk to
the business users about the apps that they need so that, in IT,
you can actually make sure that the application
that you want to make available to the rest
of your organization is secure and meets all their needs. And then that application can
be whitelisted, and make sure that all the other business
users can use it. The last tool that
I’m going to talk about is the Admin Console. So the Admin Console provides
a number of different tools and techniques to make sure that
the apps in your organization remain secure and the
data in your organization remains secure. With the shifts that
we talked about, now a lot of different
knowledge workers will be building
applications throughout their entire organization. The role of the IT
now is to become an enabler, a facilitator
for this kind of application development. As a result of that,
it’s very important that IT maintain the
security of the applications and the security of
the data, and make sure
the compliance requirements of the organization. In order to do this,
we are providing a number of different tools,
including whitelisting of the applications. Only the whitelisted
applications can be installed by the
users in your organization. Providing data access
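At its core, whitelisting reduces to a membership check over installed apps. A minimal sketch of that check, with made-up app identifiers:

```javascript
// Sketch of app whitelisting: report any installed app that is not
// on the admin-approved list. The app identifiers are hypothetical.
function findUnapproved(installedApps, whitelist) {
  const allowed = new Set(whitelist);
  return installedApps.filter((appId) => !allowed.has(appId));
}

const whitelist = ['timesheet-tool', 'equipment-booker'];
const installed = ['timesheet-tool', 'shadow-crm'];
const violations = findUnapproved(installed, whitelist);
```

In practice the Admin Console performs this kind of check for you; the sketch just shows why only whitelisted apps end up installable.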
controls via whitelisting and enabling APIs, providing
some data guardrails, as well as monitoring
the app usage and ensuring that the resources
that are allocated to the apps are maintained. So those are some
of the ways in which we are enabling
IT, in turn, to be an enabler in your
organization for knowledge-worker
apps to be built. This is an important area
of investment for us. We know that there is a lot
more work to be done here, and we are working on
bringing more capabilities to ensure that, as
IT organizations, you can empower your
business users to build apps and maintain the
security of those apps. So, so far, I gave
you an overview of the different
discrete developer tools and an overview of
the industry shifts. I want to take a moment to
summarize some of the best practices that we have learned
from many of the customers that we have spoken to. These best practices, I have
divided them into two personas. One is a knowledge worker, and
the other is IT administrator. So if you’re a
knowledge worker and you are looking to build
an app, your first step should be to identify
what kind of experience your app needs to deliver. Is it a web app that users
are going to access via URL? Is it an add-on that will
be available along with G Suite in the side panel? Or is it an automation–
automation meaning an event-driven app
that automatically does some task for
you in response to some kind of a system event? That would be the first step. The second step is to
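The three delivery shapes differ mostly in how they are triggered: a web app by a visit, an add-on by an open message or document, an automation by a system event. The sketch below illustrates the automation case. The event object's shape is an assumption for illustration; in Apps Script, an installable trigger (for example, on form submit) passes a broadly similar event to the handler.

```javascript
// Sketch of an event-driven automation: the function runs in
// response to a system event (e.g., a form submission), not a user
// opening a page. The event shape here is a simplified assumption;
// Apps Script installable triggers pass a similar object.
function onFormSubmit(event) {
  const [name, dept] = event.values;
  // A real automation might append to a sheet or send mail here;
  // this sketch just returns the action it would take.
  return { action: 'notify', message: `New request from ${name} (${dept})` };
}

const result = onFormSubmit({ values: ['Ada', 'Research'] });
```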
lay out the resources that your application needs. These resources could
be, for example, certain compute resources or
certain storage resources. Or maybe you want some
access to resources such as ML models, et cetera. Depending on the
resources that you need, you can think about
what kind of platform that you want to build
your application on. The third is the data
sources and the retrieval. The data sources could
be– for instance, it could be some on-prem system
that you’re already using. If you are in IT, you
have to think about, how will I make this data
available to the knowledge workers so that they
can build their apps? The data sources could also
be some other third-party SaaS services that you
are looking at. And maybe what you want to
access is just a few data items, potentially using APIs. In some cases, you may
want a large amount of data that needs
to be analyzed by the application itself
using some kind of analytics tools. So the fourth step is,
using all of this data from the first three
steps to make sure that you’re choosing the
appropriate G Suite developer tool. If you’re looking to
build a simple application with no code, you will
probably start using App Maker. If you want some customizations
that are a bit more advanced, then you would start
looking at Apps Script, and start using that particular
tool for building your app. If you are looking for much
more advanced data analytics– crunching a large amount
of data, using some ML, or if you are looking for
advanced storage such as Cloud SQL, then you would be thinking
about building your app on GCP, the Google Cloud Platform. Those are the kind of things
that you should look at. So now, once you
build this app, you should also think
about how to share this app with the other
users in the organization. This could be, for
example, even IT, providing some amount
of pre-built code for the other knowledge
workers to develop apps on. Or it could be IT enabling
all the enterprise users to use this app via a private
listing on the enterprise marketplace– on the
G Suite marketplace. Or if you’re even looking– if you are a third-party
developer or an ISV, you can use the G
Suite Marketplace as your distribution
platform so that you can reach many different
enterprise customers and have them use
your application. Now, looking at it from an IT
administrator’s perspective, the best practices
are– number one is, make sure that your
business users, the knowledge workers in your
organization, are empowered and they’re aware of the tools. Make the right tools available. For example, you
may want to make App Maker available
for all your business users in the organization–
enable it for them. Or you may want to build some
community around the knowledge workers so that they can
collaborate with each other, share information
with each other, and build the
application on their own. This is important for
you as an IT organization because that reduces
the load and the stress on
your organization, by moving the applications
closer to the business users. The second best practice
is to establish data access and connectivity. If you want your
knowledge workers to be able to build apps
around on-prem data, make sure that
data is available. And the third best
practice is to enforce security and governance. Now when you’re looking
at enabling your business users to install applications
from the marketplace, make sure that they’re secure. If you are enabling
your knowledge workers to build those
applications, then make sure that
those apps are also secured before they are widely
used in your organization. Or you may want to establish
some data guardrails or some quotas in
the compute to make sure that those apps comply
with those limitations that you enforce. So those are some of the best
practices that we have learned from many different customers. So at this point, I would
like to invite Monica onstage to talk about application
development at Genentech, and how they’re organized. MONICA KUMAR: Thanks, Satheesh. Hi, everyone, welcome
to the session, and it’s great to be here. Thanks to Sambit and Satheesh
for inviting us here. I have a couple of my colleagues
from Roche and Genentech. So– glad we could
get a team together. So I want to start off with
talking about who we are. So maybe some of the US-based
folks may know Genentech, but Roche is basically a global
pharma company based in Basel, Switzerland, with around– you can see we have
over 100 locations worldwide, with around
95,000 employees. And Genentech, which is a
US-based biotech company, was acquired by Roche in 2009. And ever since, we have been
a member of the Roche group. Another fun fact–
I mean, it’s really a huge number– the 11 billion
Swiss francs in R&D investment. So we basically are focused
on four therapeutic areas– oncology, immunology,
neuroscience, and infectious diseases. And really, we have both
the diagnostics and pharma divisions under one roof. And this gives us the
unique opportunity to actually look at
the patient health care across the whole
spectrum, so right from prevention, diagnosis,
treatment, and then monitoring. And our mission
is really to find those unique and best solutions
to improve our patients’ lives. To really support our business
and to fulfill the mission to have the best solutions
for our patients, we are looking at,
from an IT perspective, how can we actually
simplify the landscape, empower teams with
the right tools, and also support these
new ways of working. Our business is going through
a major transformation today. And what you see on
the left hand side– and just to give
you a background, Roche migrated to
G Suite in 2013. And prior to that, because the
company had been in business for more than 20, 30
years, as some of you know, you tend to build
up legacy applications and legacy platforms. And lots of custom solutions on
those platforms had been built. So we had sort of a messy
application landscape. And we also have– ever since we’ve
moved to the cloud, we also got these
third-party apps that were sort of
confusing our end users– when do I use this versus that? Microsoft was embedded
in the organization before we moved to G Suite. So a lot of the
questions are, when do I use SharePoint versus
Team Drive or Sites? And so our leadership looked
at this last year, and we said, there is a certain power
in offering our end users a default. And that
default is actually G Suite. So we believe G Suite offers
the right capabilities to make our end users as
productive as possible. But along with G Suite– so “G Suite ++” is really
about these third-party apps that we also have. So we use Smartsheet,
Box, Trello. All of these apps actually
add to that experience, they enhance, and
they meet the gaps that we have just in the
basic Collaboration Suite. So how are we organized to
support this very large, very complex organization? We have a global
IT team that oversees where
the business is going, and what are the enterprise
solutions we need to provide our customers so that they
are not waiting for this and having to do all
this work on their own. So for example, we are focused
on personalized health care, in the Roche science
infrastructure, ERP, and many cloud capabilities,
even around automation. So that’s something
that global IT provides, those platforms and tools. The functional IT is basically
embedded in the business. They actually have
the closest proximity to what’s going on in
each business division. So for example, our
business functions can be from research,
manufacturing, diagnostics, commercial. So each of these businesses has
their own individual demands, and they have their own
business-critical applications that they work with. And that team actually
sits, and delivers, and drives that global
IT strategy forward. And then, of course,
we wouldn’t be here and be able to do what
we do without hundreds of these knowledge workers
who are both developing and
consuming these services. But they are the ones
that are actually building these solutions,
using some of the development platforms we have. And we have a wide spectrum. Given the application
landscape that we have and the complexity of
the business demand, too, we have every– low-code, to medium, to the very
complex apps, a wide spectrum there. And so in the low-code,
we have seen a lot of– because we’ve been on
G Suite for a while. We’ve seen lots and lots
of knowledge workers build app scripts for many,
many different solutions that they want. So for example, Apps Script
comes embedded within G Suite. It gives you the ability to
connect with the G Suite API. So anyone who has
curiosity to solve a problem within their
own group can just pick it up and get started. It offers the integrated
serverless runtime, and it’s no additional cost. So I think this is
something that we have seen grow very organically. We didn’t have to do a whole
lot to support the organization. This is something
people just ran with. In the medium complexity,
we have Apps Script or other web apps
that have evolved to a higher complexity,
where we are seeing the use of GCP and APIs. In fact, we
ourselves, in IT, have built lots of global solutions,
including our employee directory, which
is called Peeps. We have built that
on GCP, leveraging our identity management
systems, HR systems, bringing together the
data so that that Peeps app can be available
both on Chrome as well as on a mobile device. But I think in the last sort of
year and a half, two years, we’ve seen a demand for more
intelligent, contextual apps that will reduce the friction
or the barrier of entry to use them. And these apps could be using
some of the cloud technologies like the natural language
processing, machine learning, and AI. We are actually signed
up with Dialogflow, which is part of the
Cloud AI stack on GCP. And we have about 70 digital
assistants and chatbots, either in a PoC or development stage. So there’s huge
interest and a demand from the business in this area. And again, we are
integrating with some of our big third-party systems,
like ServiceNow, Workday, and SAP as well. So today, I actually want to
talk about two use cases, both built with Apps Script. And both of these
actually come from our pharma technical business
operations team, which is basically manufacturing. So this team actually has two
manufacturing pilot plants here in South San Francisco. And they really
wanted to have a tool that enabled them to do some
sort of workforce planning– so for both technicians
to be able to plan, like, the next weeks and what’s in
the pipeline, and for management to have oversight over
the activities happening in these plants. And so they looked
at– obviously, there are third-party
tools available. There’s a cost associated
with that as well. But given that our
Genentech processes are so customized to the molecules
and the experiments that are being run
in these plants, just buying an off-the-shelf
product wouldn’t work. And they could also
have gone to IT. But IT also adds to the
overhead in the sense that they need to explain– firstly, get the resource,
explain all their business processes, the roles. And it takes time to
actually deliver it to the pilot plant workers. And so Scott Linnell, who’s
here with us today, the author and the person responsible
for this app script, actually is very much
like some of the knowledge workers in our organization. He saw this problem. And he’s not a
computer science major. He comes from the life
sciences background. He was an intern at
Genentech in 2017, and just dabbled in Apps Script. And along with another
intern, and then later on, as a full-time employee, took
this on and built the app script to address this need. And I think it’s a great
example of what’s possible. You don’t need to wait to
solve a business problem just because you don’t
have IT resources. And again, there’s another
example from the same team, but for a different use case. So there are these
different pieces of equipment. There are five different labs
within our manufacturing team. And they have different
equipment based on the roles that the people have. And earlier, it used to be a
very tedious manual process. People would go to the
equipment, sign up on a sheet, like, hey, I want to use
the equipment from 10 to 11 tomorrow– really manual process. And they actually–
this equipment can’t be booked by just anyone. So they’re booked by the role
that you have on the team. And so again, Apps
Script came super handy. Because they could
actually not only see the availability of the
equipment, book the equipment, it’ll show up on
their calendar, there would be an email
sent to remind them, hey, your equipment
is due for return now, and they could also say the
equipment is broken– they’ve used it and it’s not working. They could just schedule
a maintenance right there, through this tool. They have colleagues now,
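The booking tool described here needs two checks: is the requester's role allowed on that equipment, and is the requested slot free? A minimal sketch of that logic, with made-up equipment names, roles, and a simplified numeric time representation (the real tool works against Calendar and email, which are omitted here):

```javascript
// Sketch of the equipment-booking checks described above: role
// eligibility plus time-slot overlap. Equipment names, roles, and
// the reservation shape are made up for illustration; times are
// plain numbers (hours) to keep the overlap test easy to read.
function canBook(equipment, request, reservations) {
  if (!equipment.allowedRoles.includes(request.role)) return false;
  // Two slots overlap when each starts before the other ends.
  return !reservations.some(
    (r) => request.start < r.end && r.start < request.end
  );
}

const centrifuge = { name: 'Centrifuge A', allowedRoles: ['technician'] };
const existing = [{ start: 10, end: 11 }];
const ok = canBook(centrifuge, { role: 'technician', start: 11, end: 12 }, existing);
const clash = canBook(centrifuge, { role: 'technician', start: 10.5, end: 11.5 }, existing);
```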
in Germany, on the same team. And they said, we would
like to use this tool, too. And so they’ve localized
that same app script and used it for their German
colleagues and counterparts. Again, a great example
of how empowering your organization and your
knowledge workers to use what’s at their fingertips today pays off. And we’re really proud of
the work that Scott is doing. He even, in fact, ran Apps
Script training for his team there, to help them build more. I want to leave you with
some best practices. Obviously, we’re
not a small company. So some of our best practices
are really centered around how we can scale and support
a very large organization. And the first one is
the enterprise strategy. And this is not just
about technology for technology’s sake. It’s about how we can deliver
platforms and services that actually meet our
business demand. So we look at a
two- to three-year horizon and see how we are positioning
ourselves with the cloud capabilities, with
infrastructure services, application
development services, to enable and
drive that forward? Because the business is
relying on us to do that. And the second thing we
actually really value a lot is this customer experience. So when we think of IT
services, most people just don’t like going to IT. It takes long. You have to open 10 tickets. You have to go here, go there. We try to bundle these services. So we look at what does
an application developer need when they come to us? What does a DevOps person need? What do these
researchers need when they want to quickly
spin up applications? And so we look at how people
are consuming our service, what they are telling us about it,
what is their feedback, where can we do better, and
continuously have this cycle with them to improve it. And then the third thing
that we have to drive is the compliance within
all the products, platforms, and services we provide. And this is a proactive, close
collaboration with security, with legal, with
COREMAP, to make sure that anything we
recommend and anything we say, this is
supported by IT, it’s actually complying with Roche
data and privacy standards. So essentially we
are making sure that the heavy lifting
is already done, so that when end users go into
the application landscape, they can actually pick a product
knowing that IT has vetted it, it’s safe to use. The second piece is around
empowering the organization. And this, the first part,
business partnership is essential for us because of
how diverse and geographically dispersed we are. It’s very important to have– we call them IT
business partners. They’re basically
embedded in the business, but they understand
the IT landscape. They can connect the
dots for the business. They can point them
to the right people. They can point them, hey,
you don’t need to build this; there’s already a solution
available for this. So there is this
cross-sharing of ideas, but also solutions
on how business can solve their problem. We also make a very
concerted effort to make sure that
anything that we introduce into the organization,
there’s full transparency on the roadmap, so there
is nothing unexpected or a surprise. So we make sure that we
have our sounding boards, with our stakeholders
and customers internally. We also have user adoption
services regionally, spread across, who are
actually our channels. And they are
communicating new changes that are coming in our
pipeline to all of the users. We also run a lot of pilots. So we’re very– because we want
the organization to be prepared for change, we make
sure that, for example, whether it’s Team Drive,
or it was Hangouts Meet, or a new
Docs API, things like that, that are coming, we open these up, run
pilots in our test domain, give early access to developers
so that they are prepared for changes that they need
to make in their applications or in the way they work. And this actually gives us the
early access to their feedback. And we’ve been actually lucky
to have really great partnership with Google to funnel that
feedback back into the product teams so that this feedback
goes there early and often, and they actually know
what doesn’t work for us and what works for us. And the last thing
is around learning. So I think this is also
very critical, especially as technology is changing. There are new
emerging technologies coming, where our business and
our IT are actually ramping up. So we run hackathons. In fact, procurement just had a
Procure-a-thon two weeks back. This is really to say,
let’s bring our top two, three business problems here. Let’s get a team of developers,
UX, business analysts, all of us come together,
and let’s try and solve this in maybe one or two days. And this is a great
way to understand that you’re pushing the
limits of the APIs available, you’re pushing the
limits of how can we address this problem, can
we address this problem, are we too early,
should we then request more feature updates
from the product teams and come back to this later? This really gives us this cycle
of understanding and learning to be prepared to
do it in production. Part about learning is
definitely knowledge sharing. Again, we are huge or heavy
users of Google+ communities. I can tell you that a lot
of our Google+ users rely– in fact, I met Scott through
one of these communities. I just posted something on Apps
Script, and Scott responded. So there are lots
and lots of people that are connecting
with each other, sharing learnings, sharing even
their failures, like, hey, this didn’t work for me, has
anyone else tried this? And so these network communities
are ones where a lot of folks rely on them for learning
and understanding what’s going on in
the organization for specific subjects. And then centers of excellence– we have Roche experts
in specific domains. So for example G Suite app
development, API integration, we now have one on
conversational platforms. So what we do is we
look at the emerging technologies and the
business demand and say, hey, we need a set of
experts on these technologies that are ready to
jump into projects and to help the
business when they need. And so they are at hand to
advise and guide our business as need be. So we’re still
learning, obviously. This is not set in stone. We are learning and
adapting, and we are continuing to do this
to fulfill the need that– basically address what
our patients need next. And with that, I want to
hand it off to Sambit. Thank you. [APPLAUSE] SAMBIT SAMAL: All right. Thank you, Monica. What I’m going to
do is I’m going to talk about the future
of app development, some of the key trends that
at least we see and we hope that you see the same way. So a few things– so if you look at any
productivity platform, everybody provides the
standard mechanism, the same way of sending mail,
calendar, chats, writing docs, sheets, and things like that. But fundamentally, we see three
different market trends or tech trends which is going to
impact this productivity space in next five to 10 years. So what are those three? The first thing that we see
is we now have the capability to understand the user context. What do I mean by that? So everybody has
a mobile device. So at any point in time,
systems know where you are. And depending on where
you are, the experience can be customized. So that is the context– an example of the context. The second thing that the
systems are good at today is capturing the usage pattern. So what I mean by that
is how you do your work, the systems know how you
are doing that work. So things can be
customized as per that. For example, if you’re
always offlining something, the systems can know. And based on how and
when you are doing it, we can take actions on that. And the third thing
that happens is, when you go to a
new organization, the way to learn about that
particular organization is you go and ask people. The knowledge in
the organization is there in people’s heads. It’s sort of the
tribal knowledge. Wouldn’t it be better for you
to know in a systemic way? There are some people who
have tried this using sort of structured data analysis. But given the fact that
today we have this knowledge scattered across different
chat exchanges, different email exchanges, different docs, a
way to synthesize that knowledge will become important. And that’s what we’re
calling enterprise knowledge. Using these three,
you can potentially categorize the experiences
that are going to come into three broad categories. I've called these
assistive experiences, knowledge visibility,
and process automation. Let’s look at each of these. So this will give you an idea
of what I’m talking about. So if you drive any new
car today, what you can see is there is blind spot
detection in most of the cars. What is that doing? It’s helping you drive better. It’s providing an assistive
capability on driving. You can see the same pattern
emerging in software. So if you look at a chat app,
the moment some message comes in, it suggests some
response options based on the context. And why does that help you? Especially on a
mobile device, it helps you give a response
which is relevant. So that is assisting
you in responding. You can see that
if you have used Gmail Smart Compose– the
same kind of mechanism. The opportunity here is to bring
that to the developer platform so that you can use that
or the knowledge workers can use that to build
assistive experiences. The next thing I’m
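To make the assistive pattern concrete, here is a toy, keyword-overlap ranker standing in for the learned models behind features like Smart Reply. The canned replies and keyword sets are invented for illustration only:

```python
# Toy reply suggester: ranks canned responses by keyword overlap
# with the incoming message. A hypothetical stand-in for the
# learned models behind smart-reply features; not a real API.

CANNED_REPLIES = {
    "Sounds good, see you then.": {"meeting", "schedule", "tomorrow", "time"},
    "Thanks, I'll take a look.": {"doc", "document", "review", "attached"},
    "Can you share more details?": {"issue", "problem", "error", "help"},
}

def suggest_replies(message, top_n=2):
    """Return up to top_n canned replies ranked by keyword overlap."""
    words = set(message.lower().split())
    scored = [
        (len(words & keywords), reply)
        for reply, keywords in CANNED_REPLIES.items()
    ]
    scored = [(score, reply) for score, reply in scored if score > 0]
    scored.sort(key=lambda pair: -pair[0])
    return [reply for _, reply in scored[:top_n]]
```

A real system would score candidates with a trained model over the full conversation context rather than bag-of-words overlap, but the interface an assistive platform exposes looks much like this: message in, ranked suggestions out.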
going to talk about is this whole idea of
enterprise knowledge. Now, with the
structured data, you can go to your analytics
system and know, for example, who your best customer
is, and whether they are being served by the best
customer service representative in your organization. Who is the expert in
a particular area? But with enterprise knowledge,
it will be possible for you to, without having any
structured analysis, know who is the expert
and who to reach out to if we need some help, be
it everyday things like a 401(k) question or anything of that sort. So think about it. An average worker
spends 20% of the time on things like this. If, instead
of working for five days, you're only really working for
four days, that's 20%. Or you could use that day
to do your 20% project. Whichever way you look at
it, that'll help you do that. The third thing I'm going
to talk about is automation. This is a use case all
of us go through. We want to have a discussion,
and we want to chat about it. And what happens
is, before we know it, five or 10 emails
get exchanged before we set up a meeting. The system can recognize that. So it can propose some
time slots by looking at your calendar
and your availability. And you click– just one click–
and the meeting is set up. Not only that, based
on the conversation, maybe it can set up
the agenda, figure out which
documents are important, and attach them to
the Calendar invite. All those things will be
possible by automating processes and tasks. So that is the third
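The slot-finding step of that automation can be sketched as a small function. The hour-granularity intervals and fixed working-day bounds are simplifying assumptions for illustration, not how Calendar actually works:

```python
# Toy meeting scheduler: given each attendee's busy intervals
# (whole hours on a shared day), propose the first free slot
# that everyone has open. A hypothetical sketch only.

def propose_slot(busy_by_person, day_start=9, day_end=17, length=1):
    """Return (start, end) of the first common free slot, or None.

    busy_by_person: list of per-person lists of (start, end)
    busy intervals, in hours on a 24-hour clock.
    """
    for start in range(day_start, day_end - length + 1):
        end = start + length
        if all(
            all(end <= b_start or start >= b_end
                for b_start, b_end in busy)
            for busy in busy_by_person
        ):
            return (start, end)
    return None
```

For example, if one attendee is busy 9–11 and another 10–12 and 14–15, the first common free hour is 12–13, which is exactly the kind of suggestion the one-click flow would surface.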
big trend you will see. Most of the productivity
improvement and the ensuing developer tools will
capture these three trends. Now to the final section. So what’s new in G Suite? I’m going to talk
about three things. So we are launching a
new Add-Ons platform. Add-Ons has been
there for a long time. But we are going to do
a new Add-Ons platform. What that will help you do is,
instead of writing an add-on for each of the G Suite
apps, you write it once, and it works across all
the different G Suite apps. It will have the user
context, so you can customize the experience to that context. It will make
development easier, and it will make
management easier. It's one uniform experience
across G Suite instead of one per host app. The second thing that
we are announcing today is Alpha for data connectors. So what this means
is that most of you, as you try to move
your workloads to the cloud, have this hybrid
scenario where you want the cloud to work
with your on-prem systems. So with this Alpha,
what we are doing is we are integrating Sheets
with the on-prem relational data store you have in
your on-prem data center. This could be SQL Server,
this could be Oracle, this could be MySQL. So you can have all that
data come into Sheets and be used in
Sheets, and you can have that hybrid experience. The third thing that I'm going to
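To illustrate the shape of such a connector, here is a hypothetical sketch that pulls relational rows into a header-plus-rows grid like a sheet, using sqlite3 as a stand-in for an on-prem SQL Server, Oracle, or MySQL database. The actual Sheets data connectors are a managed G Suite feature, not user code like this:

```python
# Illustrative only: fetch rows from a relational database into a
# header-plus-rows grid, the tabular shape a Sheets data connector
# would populate. sqlite3 stands in for an on-prem database.
import sqlite3

def fetch_as_grid(conn, query):
    """Run `query` and return [[header, ...], [row values, ...], ...]."""
    cur = conn.execute(query)
    headers = [col[0] for col in cur.description]
    return [headers] + [list(row) for row in cur.fetchall()]

# Build a tiny in-memory table with fabricated example data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 9.5), (2, 20.0)])
grid = fetch_as_grid(conn, "SELECT id, amount FROM orders")
```

The hybrid part the Alpha addresses is the plumbing this sketch ignores: reaching the on-prem database securely from the cloud and keeping the sheet refreshed.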
talk about and announce today is what we're calling the G Suite
Marketplace Security Assessment Program. The G Suite Marketplace, it
has more than 6,000 apps, as was talked about. It becomes very,
very challenging for people to know
which apps to rely on and which apps not to rely on, and
it's a big challenge for admins. We have partnered with some of
the industry-leading security analysts. And the publishers
of these apps can go and have their
apps' security assessed. And if they pass the test,
we'll give them a badge. That then makes it easy
for the administrator to facilitate an
[INAUDIBLE] buying process. So those are the
three announcements. With that, I’ll
end this session. But your feedback
is super important. It’s a gift for us. So please provide the
feedback, and that will help us improve the system. [MUSIC PLAYING]

The social media app that’s not safe for work

Does your boss bug you on WhatsApp? For new assignments? To ask about deadlines? Do you bug him for leaves and approvals? Now no more. Companies like Hero Cycles, Amway India, Dunkin' Donuts, Domino's, RPG Group and many others are discouraging the use of WhatsApp for official purposes. Here's why… Companies have no control over or backup of WhatsApp chats, unlike e-mail. When an employee leaves, companies discontinue the official e-mail; with WhatsApp, everything stays on the employee's mobile phone. Also, WhatsApp is not an official mode of communication, because official conversation is likely to stray towards non-official conversation like gossip. In case you lose your phone, sensitive information on WhatsApp is at risk of being misused. Still talking to your boss on WhatsApp? Share this video, and let us know.