Shared Responsibility: What This Means for You as a CISO (Cloud Next ’19)

[MUSIC PLAYING] ANDY CHANG: Hi, everybody. I’m Andy Chang, Product Manager
at Google’s Cloud Security team, and I’m
joined here by Dan. DANIEL HYMEL: I'm Dan Hymel. I'm with Capital One, and I work in our cloud governance and compliance space for our multi-cloud environments. ANDY CHANG: And Dan and I will
be splitting the presentation duties. I’ll do the beginning part, Dan
will talk about Capital One’s experience living this in the
cloud, and then we’ll wrap up. DANIEL HYMEL: Sure. When I start, I’m going to
maybe ask you a question. Heck, I’ll ask
that question now. What’s in your wallet? [LAUGHTER] So I’ll start with,
actually, a cautionary tale. So stay tuned for that. And you’re going to see a
slide that you’re probably not used to at Google
presentations of this type. Thank you for coming. ANDY CHANG: Thank you, Dan. Cool. So you’re in SEC209,
“Shared Responsibility, What This Means for You as a CISO.” And I think, hopefully,
you’re benefiting from some of the comfiest
chairs for any of the talks, much better than the ones
in the large stage area. So as you’ve
probably– since you’ve been through a few
of these sessions, we’re also taking
questions in the Dory. And so go to your app. If you have questions that
come up as we’re talking, please enter those in and we’ll
handle them towards the end. We’ll be covering a few things. Overall, one of the questions
that I get talking to customers is what is the exact split
on the shared responsibility model? How do I understand it? How do I leverage it? How do I make sure
that the cloud provider is doing what they’re
supposed to do and that my own company is
doing what we’re supposed to do? So we’re going to talk a little
bit about the perceptions of what customers care
about in security, the understanding that
shared responsibility division, focusing on
visibility and control. And then, ultimately, focusing
on a quick architecture example. Then I’ll turn it over
to Dan to talk you through the lived
experience at Capital One, how they’ve been successful
on multiple public clouds. And then we’ll wrap up
with questions and answers. So when customers
talk to me, the things I hear that folks
care about are really that the right data is
delivered to the right customer at the right point in time
for the right purpose each and every time. And that’s some of
the core of security. The right thing and
only the right thing happens when it’s
supposed to happen. In addition, given as we all see
the current threat environment and the activity of
threat actors increasing, it’s important not just to
react to [AUDIO OUT] situations, but also to be able to
anticipate and innovate ahead of some of these threats. In addition, many of you are
in regulated environments, so it’s important
[AUDIO OUT] partner to be able to essentially
enable you to deliver on your responsibilities
while running your businesses in the public cloud. And lastly, and really
the focus of this, is having a clear understanding of the shared responsibility model. I think we're going to
switch over to [AUDIO OUT]. OK. So now we're– [AUDIO OUT]. I think this is [AUDIO OUT]. SPEAKER 1: Sorry, folks. Give us one second to get
this straightened out. [SIDE CONVERSATION] ANDY CHANG: OK. All right, so this is better. Cool. All right, so we’re
going to talk about– oh, I think it’s still
cutting in and out. Technical difficulties. Cool. All right, can you
guys hear me OK? All right, we’re going
to go with the hand mic and see how this goes. Thank you. So we’re going to– the
focus of this talk is really on the understanding. Enabling you guys
to understand where that split is from a shared
responsibility standpoint. And really, ultimately,
for you as CISOs, and for those of you on the security side of your organization: enabling you to accelerate your company's business velocity by helping your stakeholders understand what a controllable and acceptable amount of risk is. So really focusing on this area. At Google, the way we think
about the shared responsibility model is really around two key
items, security of the cloud and security in the cloud. And we’ll talk about how
we divvy those things up in a moment. From a core-principle standpoint, security at Google is done with defense in depth. We have at least two layers
of protections that are independent from each other
between anything of interest– anything you want to protect
and any kind of bad activity or threat actor– at scale. So, as you can imagine, Google– I think other folks
have said we’re 25% plus of the internet
from a traffic standpoint. Things done at Google for
security or for other things have to be done fully
at scale and work at scale, and by default. How
do we enable our developers, how do we enable the system
so that the controls that are necessary and required
are enabled by default so that folks can run? At the core underlying
this is really the use of strong cryptographic
credentials and identity. And what that means
is that whether it’s a machine, whether it’s
a human, whether it’s the data, whether it’s
the code, whether it’s the underlying
service, they all have unique cryptographic
identities that we can compare against what
should be happening. And provenance, the ability to
establish, through a hardware root of trust, that the
underlying hardware, the low-level software, the OS
software, the applications that are running are all fully
attested to at each stage and that we’re running the
right code at the right time. The other part of this
is that, at Google, we think trust isn’t gained just
through technology, but also through transparency. And really, what that means to
you and to customers in general is thinking about reducing what
we call the unverifiable trust surface. What that means is minimizing
the amount you actually have to trust us as
a cloud provider. Having you being provided
enough information so you can make a
decision of your own to verify the claims
that we’re making. And that’s a core part of
what we’re trying to do. Ultimately, we’re interested
in providing customers the capabilities to help you
build secure applications and fulfill your part of the
shared responsibility model. When we think about the
controls that are in place, we really think about
them in three areas. Underlying control,
things that really help you protect the data;
visibility, the ability to classify the data and monitor
the actions on that data; and then detection
response, the ability to actually validate
the controls running and detect essentially things
that are non-compliant, activities that could be
threats, or from bad actors. And for each of these
areas, we have technology that we provide to enable that. When we think about the
shared responsibility model, it’s important to understand
that the boundaries of that shared
responsibility vary based on which types of
services you’re using. And for many of our
customers, they’re using a spectrum
of services, which is why it’s not
surprising that, as a CISO or as part of the security team,
the details for what service and which part can add
to additional complexity. And so it’s important
to understand where those divisions are. When you’re running on premise– everything in the blue
represents the customer. So when you’re
running on premise, you own all of your
hardware stack, your relationships, the
underlying software running. So not surprisingly,
the full responsibility is really on you
as the customer. When you’re running as an
infrastructure as a service, which is essentially running
your own virtual machines on our platform, but
installing your own images, setting up your own
network architectures, the responsibility for us is
to provide you the ability to log and audit the things
that are running on the system– the network, the underlying
hardware and infrastructure, the pieces that you’re
building on top of. So the services we
provide to you are secure, and then you’re responsible for
putting those things together in secure architectures. When you’re running a
platform as a service– like BigQuery or App Engine,
one of those things– now, really, you’re providing
as code that you’re running. And we’re responsible for
delivering the underlying services that execute that
code in a safe and secure way. And when you’re using
software that we’ve provided, now we’re responsible,
as well, for the code. So, if you’re using
Gmail or Drive or Docs, that code has then become
part of our responsibility. And then the things you focus on are access to those applications, access to the data you provide, and the data you're loading into those applications. That's a quick summary of the different levels, and we'll go into some of this in more detail.
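As a rough illustration, that division of duties by service model can be sketched as a toy lookup. The layer names and boundary indices below are simplified assumptions for the example, not an official Google matrix:

```python
# Illustrative sketch of the shared responsibility split by service model.
# Layers and boundaries are simplified for the example, not an official matrix.

LAYERS = ["hardware", "network", "os", "runtime", "application", "access", "data"]

# Index of the first layer the CUSTOMER is responsible for, per model.
CUSTOMER_BOUNDARY = {
    "on_premise": 0,   # customer owns the full stack
    "iaas": 2,         # provider secures hardware/network; customer from OS up
    "paas": 4,         # customer supplies the application code
    "saas": 5,         # customer manages access and the data they load in
}

def responsible_party(model: str, layer: str) -> str:
    """Return who secures a given layer under a given service model."""
    boundary = CUSTOMER_BOUNDARY[model]
    return "customer" if LAYERS.index(layer) >= boundary else "provider"
```

The point of the sketch is just that the boundary moves, and that knowing which model a workload uses tells you which layers are still yours.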
From a Google perspective, we think about security of the cloud and security in the cloud. "Of the cloud" is about
the core infrastructure that you’re then
using, and the core services to build your
products and applications for your consumers. For Google, we’re responsible
for security of the cloud, and we think about it as, again, defense in depth, at scale, by default. We model the
underlying architecture that you’re running on
top of in nine layers. Everything from the
hardware all the way through the underlying low-level
software, the applications, and ultimately to usage. And everything that
we do at Google is rooted in a hardware root
of trust, which is the Titan chip that you’ve heard of
through the last couple of Next conferences. It's a purpose-built chip that sits on our compute cards, our processor cards. It establishes, on boot, a cryptographic identity for that piece of
hardware and verifies that all the underlying
pieces of software that are meant to boot are
doing so in the right way, that the hardware
components are correct and configured the right way. And if any of those checks fail, that card or compute instance does not boot and is not entered into the fleet. That allows us to make sure that, once something passes those checks, it's in a known good state and can be put into the rest of the fleet and start serving customers. That becomes the
underlying piece that drives our servers,
our storage, our network, and then ultimately
our data centers. And what that allows us to do, then, is reduce the threat of someone injecting either malicious hardware or malicious software into our systems.
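The boot-time verification described here can be sketched as a toy measured-boot check: each stage is hashed and compared against an expected manifest, and the machine joins the fleet only if every measurement matches. This is a simplification for illustration, not Titan's actual protocol:

```python
import hashlib

# Toy measured-boot sketch: each boot stage is hashed and compared against
# an expected manifest before the machine is accepted. A simplification of
# hardware-rooted attestation, not Titan's real design.

def measure(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def attest_boot(stages: list[tuple[str, bytes]], manifest: dict[str, str]) -> bool:
    """Return True only if every stage's measurement matches the manifest."""
    for name, blob in stages:
        if manifest.get(name) != measure(blob):
            return False  # fail closed: the machine does not join the fleet
    return True

# Hypothetical stage contents for the example.
firmware, kernel = b"firmware-v1", b"kernel-v1"
manifest = {"firmware": measure(firmware), "kernel": measure(kernel)}
```

The fail-closed behavior is the key design choice: a single mismatched stage keeps the machine out of the fleet entirely.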
In combination with that, as we talked about in the beginning, it's important for us at Google that we have the ability, for the code that we write, to define that the
right identity accesses the right machine, is authorized
by the right code accessing the right data, and it’s in
the right time and context. And that’s done through
cryptographically secure identities. So not only users
have identities. Machines and services
have identities, devices have
identities, and then code and data have identities. And checking all of those things
when we run a Google service is a core part of our
underlying platform. Here’s a stack with
a little more detail of the different
pieces within Google that come into play
to secure each layer. You’ll see that there’s at least
two items at each layer that are put together to provide
redundancy and really defense in depth. Whether it’s the purpose-built
underlying infrastructure or the fact that we,
from a boot standpoint, cryptographically sign all the
pieces of our boot software, from an operating system
and hypervisor standpoint, we use our own version of
KVM where we stripped out a lot of parts of
the virtual machine monitor, reduced its attack
surface and its risk surface. We’ve also then added
sandboxing in so that further creates a level of isolation. From an OS standpoint, we
use our own curated operating system for our host side. We also make that available
as the container-optimized OS for you, as a customer, to use. If you’re using our
container-optimized OS, you can also enable
automatic updates, which will patch the
container-optimized OS so you can leverage and benefit from
the same type of security controls we run
on our host side. From a storage and
network standpoint, unique to cloud providers,
all services, all data, is encrypted at rest and
in transit at Google. The logging of both internal
Googler access, as well as providing audit logs for
your users and your systems and what they do
in your systems. An identity access
management system that allows us to do
fine-grain permissions, and then really
managing those keys that are core part of
encryption at scale. From a networking side, we
have one of the world’s largest private networks. What that means is that from
when your data or your service is touched by a customer
from outside our network, it basically travels wholly
on Google-owned private networking, private fiber,
to the server in our systems. And that gives you not only
performance advantages, but gives you a single throat to choke from a network security standpoint. There are no additional
actors in the underlying hops. From an application standpoint,
both from the whole software lifecycle, from static analysis
to the patching and checking of the packages that are loaded
in, we have control over that. We also cryptographically sign each piece of software as it goes through the development stage, and then check, before it's deployed, that the manifest has all the right policies applied.
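That sign-then-verify flow can be sketched as a toy check in the spirit of Binary Authorization. A real deployment would use asymmetric keys and attestations; the HMAC and key name here are assumptions to keep the sketch small:

```python
import hashlib
import hmac

# Toy signed-deployment gate: an artifact may only deploy if it carries a
# valid signature produced when it passed the earlier pipeline checks.
# Real Binary Authorization uses asymmetric-key attestations; HMAC keeps
# the example self-contained.

SIGNING_KEY = b"build-pipeline-secret"  # illustrative only

def sign_artifact(artifact: bytes) -> str:
    """Produced by the pipeline only after checks (scans, analysis) pass."""
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()

def allow_deploy(artifact: bytes, signature: str) -> bool:
    """Deploy-time check: reject anything whose signature doesn't verify."""
    return hmac.compare_digest(sign_artifact(artifact), signature)
```

Any modification to the artifact after signing invalidates the signature, so unreviewed code can't slip into the deploy step.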
we’ve externalized to customers as a part
of binary authorization. So, if you want to adopt
that same model that you can run vulnerability
checks, various types of patching checks, static
code analysis, and then as long as code passes that
created cryptographic signature and have that checked
before the code is deployed, you’re able to do that through
our binary authorization product. Once the applications
are up and running, they’re protected by
our own global front end and the WAF capabilities and
DDoS capabilities around that. And then we provide, as well,
our own security scanning for L7 web application
vulnerabilities that we use inside of Google as
a product called Cloud Security Scanner for you to
use as a customer. When things are
deployed at scale, we’re very fond of saying that
your first users are typically abusers. So whenever we deploy a service,
we see a lot of attack traffic. Therefore, the built-in
DDoS for our own services, the ability to be our own CA. So it’s very hard for folks
to forge anything related to connecting to Google. And all services provided at Google are served over full TLS. Finally, from an
operations standpoint, not just trusting us,
but having third parties do compliance checking. The ability to do
live migration, which enables us to do patching
of live running VMs without taking your
services down, allows us to, on a continuous
basis, do that level of patching and keeping
things up to the highest level of security. Then we have full SOC threat
analysis and the ability, from a user standpoint,
to connect to our services through a BeyondCorp-style zero-trust network, as well as for them to use a hardware-based second factor, a security key. And that is the structure for
every service created at Google and used at Google. And the tools in
which you would then build on top of when you
use one of our services. When we think about
enabling customers, we do this providing
the same type of services either in the
dark green as products you can consume, in the light
green as either products that you can use from Google or
that you can get from partners, or in the case of the dark
blue, core things we still do by default for you when
you’re running on Google Cloud Platform. We’re going to
talk and highlight some of the key differentiators
on Google Cloud Platform that will help you do your
job better and fulfill your part of the
shared responsibility from a visibility
and control side. So one of the core
things which we announced to general availability
this morning is the Cloud Security
Command Center. That’s that central
pane of glass that you can bring in detections
from Google native products, third-party products, or
ones you’ve written yourself for vulnerabilities and
threats all in one place, and see those in
context with your assets and the business context
of those services and the sensitivity of the data. Gives you one place to look for
visibility, one thing for you to then query, understand your
data, and then the ability to trigger prevention,
detection, and action. In addition, what we feel is
a key part of understanding the attack surface
is understanding where your data is and what
sensitivity level [INAUDIBLE]. What we provide is the same
service we used within Google for data classification,
our cloud DLP API, which is designed to work at data at
exabyte and petabyte scales, which allows you then
to, number one, for data sitting in Google Cloud Storage
or data sitting in Bigtable– BigQuery tables– to
classify, at scale, sensitive data either through
our built-in classifiers, the ability to use
regular expressions, or to express this
in terms of data sets that we can
train our models on. Once you've identified that data, you then have a choice of multiple de-identification options: simple things like substitution, more intricate things like format-preserving encryption, or transformations that allow the data to remain usable in analytics but still preserve privacy.
In addition, unique to Google, we provide Access Transparency, which means that we provide you, in our audit logs, a notice every time a
Googler accesses your data and for what reason. What region that
Googler was from and the case number, in the case
of customer-initiated support events that triggered
that access. This is across a wide
range of Google services, and unlike some of the other
providers, not just restricted to a small set of the
provided services. Also, at Next, we’re
very excited to announce Access Approval, which now allows you to approve or deny Googler access to your data in real time, for a subset of services. So not only will we notify you when a Googler is trying to access data, you also get the ability to decide whether to grant that access; if you deny it, the Googler will not get access to that data. The results of these are
surfaced in the Cloud Security Command Center. Also through the
audit logging APIs. One of the key parts
we talked about earlier was trust through transparency. And so part of that is
there’s a lot of things that we’ve told you
about our technology and how we do things. But you don’t have
to just trust us. We go through, twice a year,
a set of certifications. You’ll see some of these here. And those certifications
go on an ongoing basis. Working with your sales teams
and with your account teams, you can get access to some of
the reports of these things. So you can see for
yourself, whether it’s overall global standards or
country-specific standards, how we do against
the requirements and certifications you need
to run your businesses. When we move to control, one
of the key things that’s unique to Google is an emphasis on
providing out-of-the-box, top-down, logically central, but
globally distributed controls. We were the first of
the cloud providers to provide organization level
viewpoint, a top-down resource hierarchy that also
is an IAM boundary and also is a network boundary
so that you have the ability to start your systems and
your developers in a safe mode where they have, by
design, restricted access that you then can
explicitly only grant through the IAM roles. From an IAM standpoint, we have
over 300 curated roles, so out of the box the separation of duties is built in, with those products requiring explicit grants before folks can use them. We have hierarchy and inheritance in our resource hierarchy. So if you're given something at the folder level, that person then has the underlying roles in each of the projects below it. But if you're at the project level, you can't go up the chain unless you're explicitly granted that type of access.
In addition, we have org policies that you can define at the top level and that can be enforced across
all your organization, regardless of the
underlying pieces. And then we’re going to talk
about two other things that affect your
communication pattern. VPC Service Controls,
which provide you the ability to define
service perimeters. So unlike the traditional
L3, L4 networks, which are based
on routes and IPs, and restrictions based on IPs,
as you move to microservices, services then have
unique identities and you have the ability
to block services access to particular types of
sensitive data or projects. And those can be applied
to Google services as well, so that you can block, for example, Google Cloud Storage writes or reads from a particular set of resources unless those services either belong to a particular access level or have a particular set of permission groups. In addition, we also
provide a similar set of capabilities around your
L3, L4 level networking. So we have the ability
for you to create what’s called a shared VPC
and, in that shared VPC, define all the
firewall rules that apply to all the tenant VPCs. So you have, again,
one logical choke point where you can provide
the administration of your overall network,
separate and independent from the underlying
administrators of the underlying projects. Both of these
concepts are designed to provide you
logical bottlenecks and choke points for
control, but not cost you anything on the performance
side because they’re implemented in globally distributed ways. And those are unique to Google. VPC Service Controls,
as I talked about, is really defining
service-level perimeters, which allow you to put that
around specific sets of sensitive data, allow
you to define access levels. And those access
levels allow you to put policy
requirements, for example, of what kind of geographies that
the data can be accessed from, what kind of restricted
IPs, what potential device characteristics are needed
to access that data. Those access context
levels apply not only to infrastructure
controls like this, but also back into your G
Suite access to Docs and Gmail. So in one construct, access levels, you can define a set of controls that applies across both G Suite and GCP.
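A toy evaluation of such an access level might look like this. The conditions (region, IP prefix, managed device) are invented for the sketch and are not the actual Access Context Manager schema:

```python
# Toy access-level evaluation in the spirit of Access Context Manager:
# a request must satisfy every condition of the level. Condition names
# and values here are illustrative.

ACCESS_LEVEL = {
    "regions": {"US", "CA"},
    "ip_prefixes": ("10.", "192.168."),
    "require_managed_device": True,
}

def allowed(request: dict, level: dict = ACCESS_LEVEL) -> bool:
    """Every condition must hold; failing any one denies the request."""
    if request["region"] not in level["regions"]:
        return False
    if not request["ip"].startswith(level["ip_prefixes"]):
        return False
    if level["require_managed_device"] and not request["managed_device"]:
        return False
    return True
```

The same level object could gate both an infrastructure API call and a G Suite document request, which is the "one construct, two surfaces" idea described above.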
Similarly, from a key management standpoint, as I talked about, by default your data
rest and in transit. But we know for some
of our customers, they have additional either
regulatory requirements or views of their threat
model or risk that require greater
control of their keys, so we provide a full spectrum. From the left,
default encryption, you don’t have to do anything. Your data is encrypted at rest. Cloud key management
system, which allows you full control
of creation of keys, destruction of keys,
rotation, times, and periods. Full logging. IAM roles on those keys
so you can demonstrate to your regulators
and auditors that you have full control of the keys. Cloud HSM, which is a
hardware-backed solution so that you can have a root
of trust for those KMS keys be in hardware. And it's a FIPS 140-2 Level 3, globally distributed Cloud HSM system. You can also, if you need
to have the root keys be sourced within
your own org, you can use customer-supplied encryption keys, which allow you to push
the encryption keys to us when you need something decrypted. We do not store that key. It stays only in memory
for the live operation and, therefore, you
have not only the root of trust in your
own hardware system, we have no access to
those underlying keys. You can also stack your own HSMs in our colos if you want that further level of additional control on
top of customer-supplied encryption keys. So, depending on the risk parameters of your business, you have a full spectrum of capabilities available to support data encryption.
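The customer-supplied-key idea, where the key is pushed to the provider per operation and never persisted, can be illustrated with a toy cipher. The XOR keystream below stands in for real AES; this is a sketch of the key-handling pattern, not Google's implementation:

```python
import hashlib
import os

# Toy illustration of customer-supplied encryption keys: the key lives
# with the customer, is used only for the live operation, and is never
# stored. The SHA-256-based XOR stream stands in for a real cipher.

def _stream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _stream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher is symmetric

customer_key = os.urandom(32)              # stays with the customer
ciphertext = encrypt(b"account data", customer_key)
# The provider stores only `ciphertext`; the key is supplied per operation.
```

Because nothing key-shaped is persisted on the provider side, losing the customer key means the ciphertext is unrecoverable, which is exactly the control posture some regulated customers want.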
Before I turn it over to Dan, I'm going to talk through how the various controls get put together when you think
about a secure architecture. So first, most customers don't start with a clean slate. They have an existing on-premise or other cloud installation. They have identities set up,
typically not a Google identity out of the box, from an
enterprise standpoint, local compute,
gateways, and then data. So the first thing to do
is to essentially connect to cloud identity, which lets us
federate your existing identity service. And now brings those
identities to be used in Google Cloud Platform. Then, through the groups and
the underlying individuals, assign out the IAM
roles and policies which you want so that folks
have the correct separation of duties. The ability to establish
upfront org-level policies that apply, regardless
of the underlying pieces, across your whole organization. For example, that VMs cannot have publicly-facing internet addresses.
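A toy version of such an org-level constraint check might look like this. The constraint name and request shape are illustrative, not the actual Org Policy API:

```python
# Toy org-policy sketch: a constraint defined at the top of the hierarchy
# (e.g. "no external IPs on VMs") is enforced on every project beneath it,
# regardless of project-level settings. Names are illustrative.

ORG_POLICIES = {"compute.vmExternalIpAccess": "deny"}

def vm_create_allowed(request: dict) -> bool:
    """Reject a VM request that asks for an external IP when the
    org-level policy denies it."""
    if request.get("external_ip") and \
            ORG_POLICIES.get("compute.vmExternalIpAccess") == "deny":
        return False
    return True
```

The enforcement point sits above the projects, so an individual project owner can't opt out of the constraint.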
Then there's the ability to define, again at the L3/L4 level,
through shared VPCs. Breaking out from an HA
standpoint and a disaster recovery standpoint, the
various regional pieces that you want to allocate within
each of the regions, the zones, the resource allocation, and
then the underlying subnet connections. By default, and
unique to Google, our networks are global. So you’re able to make
those connections very straightforward. You don’t have to pair
between the different regions. Our VPCs are global by
nature, and you’re able to, therefore, set up HA and DR
in a very straightforward way. Next thing you want
to do ultimately is you have data
sitting near on prem, and you want to bring
it securely into GCP or take the data that we compute
and bring it back to your on prem. So combining Cloud
Router, firewall rules, dedicated secure interconnect,
either through our own peering or through partner
peering, allows you to bring the data in and
out of your area securely. Also, we have VPNs to allow
you to do that as well. Next piece, then, is from
an application standpoint. Protect your application
once you serve it from the typical
attacks like DDoS. So you can either
use our default global load balancer,
which as long as you put that in front of one
of your GCE services, will take advantage of
our global front end. You can use one of our
managed services, which has DDoS built in, or you
can use the Cloud Armor product, which gives you
additional availability and the ability to write
your own rules to fine-grain manage your DDoS and WAF. Next piece, then, is you
want to turn on logging so that you have the monitoring
and evidence of things that might happen to
allow us to do detections. And audit logs are
turned on by default. You can then also look at
VPC Flow Logs, firewall logs, and then general logging. Then you want to turn on
the monitoring, learning, and detection pieces, which
include the Cloud Security Command Center, security
health analytics, the ability to look at stackdriver
monitoring, running security scanner
on your, essentially, web applications. And then if you want to
dump the data into BigQuery for additional processing. You then create a
service project, which allows you
to kind of create that boundary of control for
the services you want to run. And then create kind of a secure
perimeter using VP service controls around that so that you
can restrict which services can access that underlying data. And then, ultimately, bring
that data back into GCP through private access, Cloud DNS, and Cloud NAT. So you can either place your subnets into GCP or push our subnet pieces out. So, in summary,
those are the steps that you take to pull things
in and are the building blocks of you building
a secure service on GCP, and some of the
pieces we give you to fulfill your part of the
shared responsibility model. So with that, I’m
going to turn it over to Dan to talk to you about
Capital One’s journey. DANIEL HYMEL: Great. Thank you. Are we using this mic? ANDY CHANG: No, we can
use your mic for this. DANIEL HYMEL: OK, great. Andy, thank you for
opening the stage and allowing me to
present and open the windows on what’s actually
going on inside of Capital One. So I promised you
a cautionary tale. And this little
creature’s name is Altria. Altria is a rescue dog that my
12-year-old daughter picked up from the Richmond Animal
League in Richmond, Virginia. What does this dog
have anything to do with the topic of shared
responsibilities, compliance, and controls? On the cover, probably
absolutely nothing. But take a closer look. Take a look at what’s
around her neck. So here’s the tale. It happened shortly
after Google ’18. About a week after Google
’18, this little dog, Altria– love her to death. She has no malicious
intent whatsoever. But she’s a runner. She was trained
as a hunting dog. She’s an American hound. So on one particular night, this
was probably after the fifth escape from the homestead–
we live on a parcel out in the country– we kept talking
about, let's put in a security fence,
an invisible fence. And we kept holding off
because the price was too high. So on one particular event,
she left around 8:00 PM. She has free run of
the neighborhood. We live in this river
canyon on the James River, where about a mile on either
side, there are no fences. And you can run from
the Chesapeake Bay all the way to the mountains. On this particular day, it
was about 11:00 PM at night. We were exhausted. We couldn’t find her. We came home. And as I was planning
to go around the house to check things out, I felt
this immense pain in my leg and I jumped and wondered
what the heck happened. So I looked down and, out of the corner of my eye, there was a copperhead snake. Unfortunately, that snake took
advantage of the opportunity. And as you know, later that day,
I was in the paramedics box. I was at the hospital. I was being treated for
a copperhead snake bite. So as you know,
in everyday life, there are risk
scenarios everywhere. This is a very good example. Had we actually put in the security fence, we would've prevented Altria from escaping from our private domain
out into the public. So as a result, we
implemented this control. It's a shock collar. So as you approach
the fence, she gets a really
enlightening experience. If she’s able to go beyond that
barrier into the public domain, there are some other controls
that we’ve implemented. So what you might not
know about the rescue dogs is that when
you receive them, you receive them with a chip. So if they are
captured in public, there is a way to
identify those. We also added a collar or a tag. The tag used to have only
our home phone number, but she runs so much
that we actually had to put a cell phone number
because we would get phone calls at home when we’re
actually out in the field looking for her. So, as you see now, it has a
lot to do with cloud controls– I’m sorry– with controls,
with shared responsibilities. It’s the whole community. So let me take you into
the Capital One journey. I’m going to open the
window for you for a moment. You won’t see this often. And I’m going to tell
you and share with you a few nuggets on how
we made our journey into the cloud successful. So Capital One, today, has
a significant need for cloud because it enables us to
operate with the speed and agility required to
succeed in the digital age. Our agile sprint teams
work in two-week cycles, and according to
them, infrastructure must not be an impediment. As a result of moving to the
cloud, there are some stats, and these are real numbers. We’ve moved from a
development environment build time of three
months down to 30 minutes. New product features
which used to take a month now take two days. With unparalleled visibility
into our environments, we have an overall
improved security posture. I’m going to talk back on
security in just a moment. Our current cloud
status is this. We have over 8,000 production
applications and services. 6,000 additional
services and applications are in non-production. And nearly 100%
of our production, development, and test
servers are in the cloud. Additionally,
moving to the cloud has increased our ability
to secure and manage massive volumes of
high-quality data, operating at a higher speed
with greater resiliency, faster recovery, and at an
extraordinarily lower cost. With Cloud, our
technology organization is free to do what we do best,
build digital breakthrough experiences for our customers. If you are a customer
of Capital One, feel comfortable in knowing
that we have your back and we are doing our very
best using state of the art technologies and best practices
to protect your sensitive data. To make this happen, to make
this security posture happen, we’ve implemented a methodology
around controls and compliance. A lot of organizations might
call this cloud governance or a governance function. So essentially,
that’s what it is. It’s an enterprise
function at Capital One that balances the
need of compliance of cloud controls,
objectives, and the needs of various stakeholders,
from our board of directors to executive management, to
the enterprise communities, like operations, engineering,
info security, and cyber– all of the communities
that come together to help enable our security
capabilities in the cloud. And most importantly,
our divisional customers. These are our
lines of business– our banking community, our
auto finance community, our card community. These are the application
builders and consumers of cloud services. Another way to look at
cloud control and compliance is it’s the totality set
of policies and procedures that help an enterprise
move towards its goals while minimizing
risk and conflicts. So for us, our journey here,
our goal is all in on the cloud, and that’s just
about where we are. So let’s go back to, then,
what is this cloud compliance function? What I’m going to share with
you is a best practice example that is aligned very closely
with the Cloud Security Alliance methodology. So think of this as
a box of cake mix, where you flip that
box on the backside and it gives you
all the ingredients. These are all the ingredients
to make it happen. These are the best practices. You have to include all
of these, in my opinion, in your program for
cloud compliance to have the maximum benefit
to ensure that you’re doing the appropriate
level of due diligence for your environment. I’ll go by these one
by one, and then I’ll give you some more detail
on what they actually mean. So first off, you must
document and maintain a catalog of control objectives. These objectives are partially
derived from NIST 800-53 or FedRAMP Moderate. They are also a
compilation or a derivative of your internal security
policies and procedures. Secondly, for every
single service that our cloud
providers offer, we perform a security
control assessment using a selected set
of security control objectives in that catalog. And, for now, our current
status for our catalog of about 300 control
objectives, we use about 50 of those that are directly
applicable to cloud as it relates to our
shared responsibilities. And, of those 50, 15 of
the control objectives are mandatory
requirements, meaning that, if the service provider– Google– provides
a service to us and it fails to meet any one of
the 15, that’s a showstopper. So we’ll just tell Google,
we’re not ready yet. We need some more features
before it’s safe for us to consume that service. Next, once you’ve done
the security assessments, you move into the phase of
performing the risk analysis on the gaps that you’ve
identified relative to your control objectives. So depending on which
industry that you’re in, you may have a different
set of objectives than what Capital One has as
a financial service provider. Today, we’re using a composition
of qualitative and quantitative methodologies. We’re looking very closely at
probabilistic risk analysis, which means that there is
a probability in a risk scenario of an event happening
over an annualized time period causing a loss of
money in a particular range. So now that you’ve
identified your gaps, now comes the tough part. This is a continuous process
on risk management, control implementation, control
compliance, control exception processing. You have to implement
your controls. You have to monitor your
controls for compliance. And you have to allow
application owners to make exceptions against using
the services that might not have all the controls
or for which you’re not able to meet all of those. Not a requirement,
but the next one is the divisional
governance advisor. And what we’ve done
is we’ve assigned a technical subject
matter expert like myself to support a particular line
of business that translates everything that we’ve
talked about into terms that they consume. And most importantly,
as well, training. The associates that
perform in this function need to have background
as a subject matter expert in the cloud
provider environment. Back to the shared responsibilities
component of this. Andy talked ad nauseam about
this– this is really great– so I won’t go into
any detail here. But I wanted to share with you
that we understand this well, and I’ll just move on
to the next one, which is a payoff slide. This slide basically
says, for Capital One, we look at shared
responsibilities from both a platform perspective
and an application perspective. Let’s take the application
perspective first. I’m sorry, the
platform perspective. So my team working
in cloud governance, we do everything that
we talked about before. We assess the services. We ensure that they meet
our security requirements. We work with cloud
engineering teams and we provision the
services and make those available to our
consumers that leverage those services by
way of applications. So we’re accountable
for ensuring
identity and access management, that we
deploy the operating systems appropriately, meaning
that we provision system images. Rather than using the
cloud provider images, we provision our own so
that we appropriately harden them to meet
our internal standards. We’re accountable for what
we put into the cloud, so we take full
responsibility for that. One particular example of
a control related to this is a notion of rehydration. So, as a virtual machine ages,
it takes on vulnerabilities because of the vulnerability
aging process, and so we have a requirement
to rehydrate every 60 days. If you go beyond that,
bells and whistles go off. If you go past 90,
bigger whistles go off. If you go to 120, the
cannons go off and so forth. So I think you get that. So our cloud
governance function, from a centralized
perspective, is we make controls and scripts
available that either we can run on behalf of the
development community, or that we can give it to them
and they can actually run it. For example, there
might be a requirement to apply labels or tags
to all of your resources. If the development community
forgets to do that, they can run a
script to make sure that they’re compliant with that. Let’s move on then
to the application. If you build it, you own it. That’s the mantra
at Capital One. So if we offer a service
to you, you’re golden. You can use that. It’s been whitelisted. The controls are in place, but
you have some accountability. I’ll talk a bit more
about that accountability. So why are cloud
controls important? I wanted to make sure that you
had full visibility to that. For Capital One,
it’s our mission and our fiduciary
responsibility. If you take a standard document
classification or information handling matrix,
you will find that your proprietary or
confidential data is probably your most important asset and
requires a level of protection beyond those that are
company or public. So what we do and
how we’ve bifurcated this is we look at the controls
in the cloud control catalog in three particular areas. There’s technical,
operational, and management. And I’ll show you, on the next
slide, how we codify those. But most importantly, the
controls have to always be on. The windows have to
always be locked. So that means
security protocols, public accessibility, data
encryption, identity access, and PCI. There is no opportunity. There is no tolerance
for control being off. So where do cloud
controls originate? I mentioned to you a
bit earlier that they might originate from
cybersecurity frameworks, NIST, FedRAMP, and so forth. Here’s a particular small matrix
of the 14 control categories in the NIST
framework, where I’ve bifurcated those by technical,
operational, and management. So we take our Cloud
Control catalogs. We look at a service
that Google has provided. We perform that
security assessment. The diagram below, or the
colorful green, red, blue chart, is the actual
detail of the questions that we might ask. And, incidentally, I
have to say that Google has been an extraordinary
partner in this. Google has our scripts. They know what we require. They know when a service
is ready to meet the higher bar for financial
services industry, and so they are embracing that. They’re working very diligently
to say, hey, Capital One, we’ve got this new service. And they know what
our requirements are. So they know when to
actually make the ask. And finally, we complete
the service assessment. We deliver the package. Now, it’s important to
note here, under Item C, that it’s a
cross-functional team. We include governance, we
include engineering resources, cyber resources, architecture,
IAM architecture and, also, our information assurance
third-party management function, where
we fully validate. We have an auditable
trail that our regulators and our internal auditors
can go back and say, did you ask that question? Google tells us a lot of good
things about their services, but we firmly believe
in trust, but verify. So this is the final slide. I’ve got about 15
seconds or so, and then we’ll save time for Q&A.
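The mandatory-control gate Dan describes (roughly 50 applicable control objectives, 15 of them mandatory, and any unmet mandatory objective is a showstopper) can be sketched as follows; the control names and function are hypothetical illustrations, not Capital One’s actual tooling:

```python
# Hypothetical sketch of the service-assessment gate described above:
# a cloud service that fails any one mandatory control objective is a
# showstopper. The control-objective names below are invented.
MANDATORY_CONTROLS = {
    "encryption-at-rest",
    "encryption-in-transit",
    "audit-logging",
}

def assess_service(passed_controls):
    """Return a verdict for a cloud service given the set of
    control-objective names it satisfied."""
    missing = MANDATORY_CONTROLS - set(passed_controls)
    if missing:
        # Any missing mandatory control blocks adoption of the service.
        return "showstopper: " + ", ".join(sorted(missing))
    return "approved"
```

A service missing even one mandatory control would come back as, for example, `showstopper: audit-logging`, signaling that more features are needed before the service is safe to consume.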
But here’s the payoff, and this is what
resonates with shared responsibilities from an
application owner perspective. Here’s what it means. If you have an application– let’s take a basic, multi-tier
architecture application– where you’re going to
need a load balancer, a web server, application
server, database and file storage, that application
or the type of application may require that you adhere
to an enterprise architecture standard. There may be a defined
architecture already. The architecture might
even say, in order to meet the requirements
for the architecture, there are certain
cloud services that you are mandated to follow. So the application owner
defines the application. They choose the architecture. They select the cloud
services that are approved. This is by no
means the only list of what’s approved
at Capital One, so we could put dot dot dot. After you select the
approved services, then you become more
aware, as an application owner, of what your actual
requirements for controls are. So if you pick a Compute
Engine, if you pick Storage, you will have some requirements
for identity access, data encryption, and so forth. So this is really
the whole story. There’s so much detail here. Feel free to reach
out to me on LinkedIn if you have some
follow-up questions. And we’ll actually go back to
Andy to close it out for us. ANDY CHANG: Thank you, Dan. So, in summary, one last slide.
one last slide. [APPLAUSE] So security of the
cloud is something the cloud provider secures. Security in the
cloud is something that the cloud customer
secures for their data and their content. At Google Cloud, we look to
empower you as a customer with the visibility to
control and secure the things that you’re responsible for. And as Dan talked
about, customers can be successful in cloud
using a very thoughtful process, structure, and clean
separations of responsibility within their own org. [MUSIC PLAYING]

Best Practices in Developing G Suite Business Apps (Cloud Next ’19)

In this session, we are going to talk about
developing applications on G Suite, some of the dos
and don’ts of developing applications, and how to set
up your organization. So my name is Satheesh. I am your host for
the session today. Along with me, my co-presenters
are Monica from Genentech, and we are going to have
Sambit from Google Cloud talk in this session as well. So we will start with
giving an overview of what kind of challenges
our enterprise customers face when it comes to developing
applications on G Suite, and how we are driving certain
industry trends with some of our product lines,
followed by talking about our specific products. And then we’ll have Monica
present how they’re organized, and what are some of the best
practices in their organization with respect to app
development on G Suite. Then we’ll talk about, what
is the future looking like, what are the key trends that we
are seeing in the industry when it comes to developing apps
in the productivity space generally, and within G Suite
as well more specifically. And then we’ll wrap
up this session with an overview of some
of the exciting features that we are announcing today
with respect to G Suite development platforms. So before we get
started, a quick note on how to submit questions. So all of you must
have the mobile app. So you will be able
to submit questions through your mobile
app on this Dory. So you can go to this
particular session details, and then submit
[INAUDIBLE] there. Towards the end, we will try
to address your questions. We’ll also be able
to– hopefully we’ll have time to take some
live questions from this room as well. Sounds good? Perfect. So, without further
delay, let’s get started. So imagine yourself to
be a salesperson, an HR professional,
financial analyst– many different roles
in an enterprise. Your key responsibility
is in driving the business with
respect to your role, with respect to your
area of expertise. When you do this,
you’re obviously using many different
applications. You’re using G Suite, plus
you’re using other enterprise applications as part of this. That means you need some
customizations in order to run your business process. You need all those apps. When it comes to
getting those apps, there are some key
challenges that the business users in an organization face. Number one challenge is
what we call the skills gap. As the business
process owner, you are the expert in
your business process. You know how your
process should run. You know how your
business is run. You know your role
better than anybody else in your organization. When you want to
get those apps, you need to go to your IT
developers, and talk to them, and educate them about
the business process. And then they have the
technical expertise to develop those applications. Do you see the problem there? So on one hand,
the business users are really proficient
with their processes. They have the right
skills for that, but they lack the
technical expertise. On the other hand, the
technical developers have the right
technical knowledge, but they don’t know all
the business processes. This is what we refer
to as a skills gap. This requires communication
back and forth in order to get the right
application that you need. This is the number
one challenge. The second challenge
is, the IT developers, now they have to work with
many different business users in the organization,
across the enterprise, understand those
business processes, and then develop the
applications for them. That leads to scaling challenges
for the IT developers. The resources become
limited as a result of that. In any organization, this
is a very common problem. The third challenge
that we see is that technology keeps evolving. Business processes
also keep evolving. And as a result of that,
it’s hard to keep pace with those changes. Everybody’s always
playing catch-up in order to stay in tune
with these changes that are happening. That leads to delays in
getting the apps that you need. That leads to getting
the updates to the apps that you’re using. That’s the number
three challenge. The net result of all
of these challenges is that the involved
stakeholders get frustrated. There are many
different stakeholders. I have identified three
stakeholders here. Number one is the business user
who needs these applications. Number two is the IT developer,
the technically proficient developers. And number three
is somebody who is tracking the cost,
and the schedules, and managing all these
programs and costs, what I call the IT director here. The business users are
frustrated because they are not able to get the
right apps that they need on time or the updates
that they need on time. The IT developers are frustrated
because their project backlog just keeps growing
as they try to work with many different
organizations in the enterprise. The net result for
the IT director is that they
are now facing the cost and schedule [INAUDIBLE]. Those are the challenges
that everybody faces– all the stakeholders face. How are enterprises
addressing this? What are the shifts that are
happening in the industry to address this? So number one shift
that is happening is that the application
development itself is being moved to
the business users– closer to the
business users– where they have the right expertise
on the business process and they are in
the best position to find out what
is the application that they need, and
better still, actually develop these applications. This is happening on no-code
tools for the business users to develop those applications. The second shift that is
happening in this industry is that there are lot
of SaaS applications that are coming up– Software as a
Service applications. When there is a
need, it’s probably most efficient to go
and buy an application, as long as there is one
that meets the need. That has led to the growth of
the enterprise marketplaces and the growth of
the ecosystems, where ISVs and third-party
developers develop these apps and make them available to
the different businesses. The third shift is
within the enterprise IT. IT’s role itself is evolving. It’s evolving from one of being
an app developer and an app development
organization to being an enabler for development of
applications, by the businesses and by the business users. The business users in
this slide are also called the knowledge workers. So the knowledge workers
are now developing the apps, and the IT organizations are
becoming the enablers. They’re providing
the right tools, they are providing
the right data access. They’re providing the right
guardrails– security, et cetera– to make sure
that those applications meet the organization’s needs and
comply with the organization’s policies. In Google and in G Suite,
we are at the forefront of driving these shifts. Number one, with respect
to the first shift, we are providing the
low-code and no-code tools to enable the business users
to develop these applications. Number two, we are
building this ecosystem and we are building
this marketplace where business users in
an organization can go and find
applications that they need and start deploying and
using those applications. With respect to
the third shift, we are also providing the
right tools and technologies that data administrators
need in order to ensure that all these
apps that are being built by the knowledge workers
across an organization remain secure, the enterprise’s
data remains secure, and that the IT administrators
can go on and monitor the application usage and
make sure that they’re able to whitelist the apps
that they allow the enterprise users and the business users
to install and use that. That’s how we are driving these
shifts in the G Suite Developer platform. Getting into the
specifics, I want to talk about the five products
in G Suite Developer platform, give an overview, so
that you can go out and explore further
details on this. So the number one developer tool
that we provide is Apps Script. Apps Script is a
low-code platform. How many of you here
have already heard about Apps Script? That kind of shows how
popular Apps Script is. You can actually
see that there are over three billion weekly
executions on Apps Script. So it’s a low-code
developer platform, and it enables the business
users to quickly build apps. How so? Because it provides
an integrated document environment. It provides APIs for
all the G Suite apps. It also provides security,
in terms of OAuth, et cetera. And it provides an integrated
runtime environment, so that when you’re
building the app, you don’t have to look
elsewhere to think about where you’re going to run that app. So it has that integrated
runtime serverless environment that you can use to go ahead
and run the application. From a best practices
perspective, if you have an
application that is going to be used by, let’s
say, a few hundred users, Apps Script provides the
perfect platform to get started. Apps Script still
requires some coding and some proficiency in coding. This is for what we call
the advanced knowledge workers, or citizen developers. Let’s say you build an
app and it becomes very popular in your organization. Now it needs to be used by,
let’s say, a few thousand users as opposed to a
few hundred users. That’s the time when,
as a business user, you talk to the IT
organization and IT developers, figure out how to
scale up the application. And also, potentially, you need
new features– maybe some ML
data analytics capabilities. That’s when you can use
Google Cloud Platform, and scale that application,
and build the new features. The second tool that I want
to talk about is App Maker. App Maker is intended to
be a no-code platform, a no-code application
development tool. This is for pure business users,
knowledge workers who cannot code. At this point in
time, App Maker is great for building simple cloud
applications with the data that you are already using
for your business users. If you want something that
is a bit more advanced– if you want more
advanced customization– you can use Apps Script
to customize your app that is built using App Maker. So from a best
practices perspective, if you are a business user, you
will start building the app– as long as it’s a simple
cloud application, you should be able
to build with all the visual drag-and-drop tools,
as is illustrated on the slide here. If you want more
customizations, you would go to the IT
department and try to seek some of
their help in order to customize the application. The third tool that
I’m going to talk about is the G Suite Add-Ons. I literally know of
nobody who is just using one or two applications
in their business, right? They’re always using a
suite of applications. For example, if
you’re a salesperson, you’re using G Suite, Gmail,
Events, Calendar, et cetera. But chances are very
high that you are also using a Salesforce or a Dynamics
CRM along with this G Suite. So Add-Ons provides the right
tools and the framework for you to get an integrated experience
with third-party apps. That’s what Add-Ons
is intended to do. It provides an
integrated experience. It also provides a
development environment in order to build those
Add-Ons so that we can use multiple applications
in conjunction with G Suite, in an
integrated experience. So from a best
practices perspective, you’d look for these add-ons– to start with, on the G
Suite marketplace, where the chances are that you’ll
be able to find the right add-on that you need. Otherwise, this is not a
development tool intended for the knowledge workers. Rather it is a framework
intended for use by the knowledge workers. So from a development point
of view, you have to go to IT and ask them to
develop an add-on and make it available
to you, and potentially many of your colleagues
in the organization. The next one is the
G Suite Marketplace. There are over 6,000
ISV applications– both web applications,
productivity tools, add-ons, that are available on
G Suite Marketplace. So if you’re looking to
solve a particular problem, this is probably the
best place to start with. Look to see whether
there is already an application that’s available,
and use that application. And if not, then
you’ll have to look at how to build something in
collaboration with your IT. So from a best practice
perspective, you should talk to
your IT organization about the app that you need so that IT
can actually make sure that the application
that you want to make available to the rest
of your organization is secure, it meets all your needs. And then that application can
be whitelisted, and make sure that all the other business
users can use that. The last tool that
I’d like to talk about is the Admin Console. So the Admin Console provides
a number of different tools and techniques to make sure that
the apps in your organization remain secure and the
data in your organization remains secure. With the shifts that
we talked about, now a lot of different
knowledge workers will be building
applications through their entire organization. The role of the IT
now is to become an enabler, a facilitator
for this kind of application development. As a result of that,
it’s very important for IT to maintain the
security of the applications and the security of
the data, and make sure that these apps and data meet
the compliance requirements of the organization. In order to do this,
we are providing a number of different tools,
including whitelisting of the applications. Only the whitelisted
applications can be installed by the
users in your organization. Providing data access
controls via whitelisting APIs and enabling APIs, providing
some data guardrails, as well as monitoring
the app usage and ensuring that the resources
that are allocated from the app are maintained. So those are some
of the ways how we are enabling
the IT to go and be an enabler in your
organization in turn, for knowledge worker
apps to be built. This is an important area
of investment for us. We know that there is a lot
more work to be done here, and we are working on
bringing more capabilities to ensure that, as
IT organizations, you can empower your
business users to build apps and maintain the
security of those apps. So, so far, I gave
you an overview of the different
discrete developer tools and an overview of
what are the industry shifts. I want to take a moment to
summarize some of the best practices that we have learned
from many of the customers that we have spoken to. These best practices, I have
divided them by two personas. One is a knowledge worker, and
the other is IT administrator. So if you’re a
knowledge worker and you are looking to build
an app, your first step should be to identify
what kind of experience your app needs to deliver. Is it a web app that users
are going to access via URL? Is it an add-on that will
be available along with G Suite in the side panel? Or is it an automation–
automation meaning an event-driven app
that automatically does some task for
you in response to some kind of a system event? That would be the first step. The second step is to
lay out the resources that your application needs. These resources could
be, for example, certain compute resources or
certain storage resources. Or maybe you want some
access to resources such as ML models, et cetera. Depending on the
resources that you need, you can think about
what kind of platform that you want to build
your application on. The third is the data
sources and the retrieval. The data sources could
be– for instance, it could be some on-prem system
that you’re already using. If you are in IT, you
have to think about, how will I make this data
available to the knowledge workers so that they
can build our apps? The data sources could also
be some other third-party SaaS services that you
are looking at. And maybe what you want to
access is just a few data items, potentially using APIs. In some cases, you may
want a large amount of data that needs
to be analyzed by the application itself
using some kind of analytics tools. So the fourth step is,
using all of these data from the first three
steps to make sure that you’re choosing the
appropriate G Suite developer tool. If you’re looking to
build a simple application with no code, you will
probably start using App Maker. If you want some customizations
that are a bit more advanced, then you would start
looking at Apps Script, and start using that particular
tool for building your app. If you are looking for much
more advanced data analytics– crunching a large amount
of data, using some ML, or if you are looking for
advanced storage such as Cloud SQL, then you would be thinking
about building your app on GCP, the Google Cloud Platform. Those are the kind of things
that you should look at. So now, once you
build this app, you should also think
about how to share this app with the other
users in the organization. This could be, for
example, even IT, providing some amount
of pre-built code for the other knowledge
workers to develop apps on. Or it could be IT enabling
all the enterprise users to use this app via a private
listing on the enterprise marketplace– on the
G Suite marketplace. Or if you’re even looking– if you are a third-party
developer or an ISV, you can use the G
Suite Marketplace as your distribution
platform so that you can reach many different
enterprise customers and have them use
your application. Now, looking at it from an IT
administrator’s perspective, the best practices
are– number one is, make sure that your
business users, the knowledge workers in your
organization, are empowered and they’re aware of the tools. Make the right tools available. For example, you
may want to make App Maker available
for all your business users in the organization–
enable it for them. Or you may want to build some
community around the knowledge workers so that they can
collaborate with each other, share information
with each other, and build the
application on their own. This is important for
you as an IT organization because that reduces
the load on your– and the stress on
your organization, by moving the applications
closer to the business users. The second best practice
is to establish data access and connectivity. If you want your
knowledge workers to be able to build apps
around on-prem data, make sure that
data is available. And the third best
practice is to enforce security and governance. Now when you’re looking
at enabling your business users to install applications
from the marketplace, make sure that they’re secure. If you are enabling
your knowledge workers to build those
applications, then make sure that
those apps are also secured before they are widely
used in your organization. Or you may want to establish
some data guardrails or some quotas in
the compute to make sure that those apps comply
with those limitations that you enforce. So those are some of the best
practices that we have learned from many different customers. So at this point, I would
like to invite Monica onstage to talk about application
development, and Genentech, and how they’re organized. MONICA KUMAR: Thanks, Satheesh. Hi, everyone, welcome
to the session, and it’s great to be here. Thanks to Sambit and Satheesh
for inviting us here. I have a couple of my colleagues
from Roche and Genentech. So– glad we could
get a team together. So I want to start off with
talking about who we are. So maybe some of the US-based
folks may know Genentech, but Roche is basically a global
pharma company based in Basel, Switzerland, with around– you can see we have
over 100 locations worldwide, with around
95,000 employees. And Genentech, which is a
US-based biotech company, was acquired by Roche in 2009. And ever since, we have been
a member of the Roche group. Another fun fact–
I mean, it’s really a huge number– the 11 billion
Swiss francs in R&D investment. So we basically are focused
on four therapeutic areas– oncology, immunology,
neuroscience, and infectious diseases. And really, we have both
the diagnostics and pharma divisions under one roof. And this gives us the
unique opportunity to actually look at
the patient health care across the whole
spectrum, so right from prevention, diagnosis,
treatment, and then monitoring. And our mission
is really to find those unique and best solutions
to improve our patients’ lives. To really support our business
and to fulfill the mission to have the best solutions
for our patients, we are looking at,
from an IT perspective, how can we actually
simplify the landscape, empower teams with
the right tools, and also support these
new ways of working. Our business is going through
a major transformation today. And what you see on
the left hand side– and just to give
you a background, Roche migrated to
G Suite in 2013. And prior to that, because the
company had been in business for more than 20, 30
years, as some of you know, you tend to build
up on legacy applications, legacy platforms. And lots of custom solutions on
those platforms had been built. So we had sort of a messy
application landscape. And we also have– ever since we’ve
moved to the cloud, we also got these
third-party apps that were sort of
confusing our end users– when do I use this versus that? Microsoft was embedded
in the organization before we moved to G Suite. So a lot of the
questions is, when do I use SharePoint versus
Team Drive or Sites? And so our leadership looked
at this last year, and we said, there is a certain power
in offering our end users a default. And that
default is actually G Suite. So we believe G Suite offers
the right capabilities to make our end users as
productive as possible. But along with G Suite– so the G Suite++ is really
about these third-party apps that we have also. So we use Smartsheet,
Box, Trello. All of these apps actually
add to that experience, they enhance, and
they meet the gaps that we have just in the
basic Collaboration Suite. So how are we organized to
support this very large, very complex organization? We have a global
IT team that looks at where
is the business going, and what are the enterprise
solutions we need to provide our customers so that they
are not waiting for this and having to do all
this work by themselves. So for example, we are focused
on personalized health care, in the Roche science
infrastructure, ERP, and many cloud capabilities,
even around automation. So that’s something
that global IT provides, those platforms and tools. The functional IT is basically
embedded in the business. They actually have
the closest proximity to what’s going on in
each business division. So for example, our
business functions can be from research,
manufacturing, diagnostics, commercial. So each of these businesses have
their own individual demands, and they have their own
business-critical applications that they work with. And that team actually
sits, and delivers, and drives that global
IT strategy forward. And then, of course,
we wouldn’t be here and be able to do what
we do without hundreds of these knowledge workers
who are both developing, but they’re also
consuming these services. But they are the ones
that are actually building these solutions,
using some of the development platforms we have. And we have a wide spectrum. Given the application
landscape that we have and the complexity of
the business demand, too, we have every– low-code, to medium, to the very
complex apps, a wide spectrum there. And so in the low-code,
we have seen a lot of– because we’ve been on
G Suite for a while. We’ve seen lots and lots
of knowledge workers build app scripts for many,
many different solutions that they want. So for example, Apps Script
comes embedded within G Suite. It gives you the ability to
connect with the G Suite API. So anyone who has
curiosity to solve a problem within their
own group can just pick it up and get started. It offers the integrated
serverless runtime, and it’s no additional cost. So I think this is
something that we have seen grow very organically. We didn’t have to do a whole
lot to support the organization. This is something
people just ran with. In the medium complexity,
we have Apps Script or other web apps
that have evolved to a higher complexity,
where we are seeing the use of GCP and APIs. In fact, we
ourselves, in IT, have built lots of global solutions,
including our employee directory, which
is called Peeps. We have built that
on GCP, leveraging our identity management
systems, HR systems, bringing together the
data so that that Peeps app can be available
both on Chrome as well as on a mobile device. But I think in the last sort
year and a half, two years, we’ve seen a demand for more
intelligent, contextual apps that will reduce the friction
or the barrier of entry to use them. And these apps could be using
some of the cloud technologies like the natural language
processing, machine learning, and AI. We are actually signed
up with Dialogflow, which is part of the
Cloud AI stack on GCP. And we have about 70 digital
assistants and chatbots, either in a PoC or development stage. So there’s huge
interest and a demand from the business in this area. And again, we are
integrating with some of our big third-party systems,
like ServiceNow, Workday, and SAP as well. So today, I actually want to
talk about two use cases, both built with Apps Script. And both of these
actually come from our pharma technical business
operations team, which is basically manufacturing. So this team actually has two
manufacturing pilot plants here in South San Francisco. And they really
wanted to have a tool that enabled them to do some
sort of workforce planning– so for both technicians
to be able to plan, like, the next weeks and what’s in
the pipeline, and for management to have oversight over
the activities happening in these plants. And so they looked
at– obviously, there are third-party
tools available. There’s a cost associated
with that as well. But given that our
Genentech processes are so customized to the molecules
and the experiments that are being run
in these plants, just buying an off-the-shelf
product wouldn’t work. And they could also
have gone to IT. But IT also adds to the
overhead in the sense they need to explain– firstly, get the resource,
explain all their business processes, the roles. And it takes time to
actually deliver it to the pilot plant workers. And so Scott Linnell, who’s
here with us today, the author and the person responsible
for this app script, actually is very much
like some of the knowledge workers in our organization. He saw this problem. And he’s not a
computer science major. He comes from the life
sciences background. He was an intern at
Genentech in 2017, and just dabbled in Apps Script. And along with another
intern, and then later on, as a full-time employee, took
this on and built the app script to address this need. And I think it’s a great
example of what’s possible. You don’t need to wait to
solve a business problem just because you don’t
have IT resources. And again, there’s another
example from the same team, but for a different use case. So there are these
different pieces of equipment. There are five different labs
within our manufacturing team. And they have different
equipment based on the roles that the people have. And earlier, it used to be a
very tedious manual process. People would go to the
equipment, sign up on a sheet, like, hey, I want to use
the equipment from 10 to 11 tomorrow– really manual process. And they actually–
this equipment can’t be booked by just anyone. So they’re booked by the role
that you have on the team. And so again, Apps
Script came super handy. Because they could
actually not only see the availability of the
equipment, book the equipment, it’ll show up on
their calendar, there would be an email
sent to remind them, hey, your equipment
is due for return now, and they could also say the
equipment is broken– they’ve used it and it’s not working. They could just schedule
a maintenance right there, through this tool. They have colleagues now,
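The booking flow described here– check the user’s role, check availability, reserve the slot, put it on a calendar– boils down to overlap detection on time intervals. Below is a minimal sketch of that core check in plain JavaScript; the names and data shapes are hypothetical, not taken from the actual Genentech script.

```javascript
// Two half-open intervals [start, end) overlap if each starts
// before the other ends.
function overlaps(a, b) {
  return a.start < b.end && b.start < a.end;
}

// Reject a booking if the user's role may not use the equipment,
// or if the requested slot collides with an existing reservation.
function tryBook(equipment, request) {
  if (!equipment.allowedRoles.includes(request.role)) {
    return { ok: false, reason: 'role not permitted' };
  }
  if (equipment.bookings.some(b => overlaps(b, request))) {
    return { ok: false, reason: 'slot already taken' };
  }
  equipment.bookings.push({ start: request.start, end: request.end });
  // In Apps Script, a real tool would now create the calendar event,
  // e.g. via CalendarApp, and send the reminder email.
  return { ok: true };
}
```

In the real tool, the reservations would live in a Sheet or Calendar rather than an in-memory array, but the overlap test stays the same.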
in Germany, the same team. And they said, we would
like to use this tool, too. And so they’ve localized
that same app script and used it for their German
colleagues and counterparts. Again, a great example
of how empowering your organization and your
knowledge workers to use what’s at their fingertips today. And we’re really proud of
the work that Scott is doing. He even, in fact, ran Apps
Script training for his team there, to help them build more. I want to leave you with
some best practices. Obviously, we’re
not a small company. So some of our best practices
are really centered around how we can scale and support
a very large organization. And the first one is
the enterprise strategy. And this is not just
about seeing technology for technology’s sake. It’s about how can we deliver
platforms and services that actually meet our
business demand. So we look at a
two- to three-year horizon and see how we are positioning
ourselves with the cloud capabilities, with
infrastructure services, application
development services, to enable and
drive that forward? Because the business is
relying on us to do that. And the second thing we
actually really value a lot is this customer experience. So when we think of IT
services, most people just don’t like going to IT. It takes long. You have to open 10 tickets. You have to go here, go there. We try to bundle these services. So we look at what does
an application developer need when they come to us? What does a DevOps person need? What do these
researchers need when they want to quickly
spin up applications? And so we look at how people
are consuming our service, what are they telling about it,
what is their feedback, where can we do better, and
continuously have this cycle with them to improve it. And then the third thing
that we have to drive is the compliance within
all the products, platforms, and services we provide. And this is a proactive, close
collaboration with security, with legal, with
COREMAP, to make sure that anything we
recommend and anything we say, this is
supported by IT, it’s actually complying with Roche
data and privacy standards. So essentially we
are making sure that the heavy lifting
is already done, so that when end users go into
the application landscape, they can actually pick a product
knowing that IT has vetted it, it’s safe to use. The second piece is around
empowering the organization. And this, the first part,
business partnership is essential for us because of
how diverse and geographically dispersed we are. It’s very important to have– we call them IT
business partners. They’re basically
embedded in the business, but they understand
the IT landscape. They can connect the
dots for the business. They can point them
to the right people. They can point them, hey,
you don’t need to build this; there’s already a solution
available for this. So there is this
cross-sharing of ideas, but also solutions
on how business can solve their problem. We also make a very
concerted effort to make sure that
anything that we introduce into the organization,
there’s full transparency on the roadmap, so there
is nothing unexpected or a surprise. So we make sure that we
have our sounding boards, with our stakeholders
and customers internally. We also have user adoption
services regionally, spread across, who are
actually our channels. And they are
communicating new changes that are coming in our
pipeline to all of the users. We also run a lot of pilots. So we’re very– because we want
the organization to be prepared for change, we make
sure that, for example, whether it’s Team Drive,
or it was Hangouts Meet, or a new
Docs API– things like that, that are coming– we open these up, run
pilots in our test domain, give early access to developers
so that they are prepared for changes that they need
to make in their applications or in the way they work. And this actually gives us the
early access to their feedback. And we’ve been actually lucky
to have really great partnership with Google to funnel that
feedback back into the product teams so that this feedback
goes there early and often, and they actually know
what doesn’t work for us and what works for us. And the last thing
is around learning. So I think this is also
very critical, especially as technology is changing. There are new
emerging technologies coming, where our business and
our IT is actually ramping up. So we run hackathons. In fact, procurement just had a
Procure-a-thon two weeks back. This is really to say,
let’s bring our top two, three business problems here. Let’s get a team of developers,
UX, business analysts, all of us come together,
and let’s try and solve this in maybe one or two days. And this is a great
way to understand that you’re pushing the
limits of the APIs available, you’re pushing the
limits of how can we address this problem, can
we address this problem, are we too early,
should we then request more feature updates
from the product teams and come back to this later? This really gives us this cycle
of understanding and learning to be prepared to
do it in production. Part about learning is
definitely knowledge sharing. Again, we are huge or heavy
users of Google+ communities. I can tell you that a lot
of our Google+ users rely– in fact, I met Scott through
one of these communities. I just posted something on Apps
Script, and Scott responded. So there are lots
and lots of people that are connecting
with each other, sharing learnings, sharing even
their failures, like, hey, this didn’t work for me, has
anyone else tried this? And so these network communities
are ones where a lot of folks rely on them for learning
and understanding what’s going on in
the organization for specific subjects. And then centers of excellence– we have Roche experts
in specific domains. So for example G Suite app
development, API integration, we now have one on
conversational platforms. So what we do is we
look at the emerging technologies and the
business demand and say, hey, we need a set of
experts on these technologies that are ready to
jump into projects and to help the
business when they need. And so they are at hand to
advise and guide our business as need be. So we’re still
learning, obviously. This is not set in stone. We are learning and
adapting, and we are continuing to do this
to fulfill the need that– basically address what
our patients need next. And with that, I want to
hand it off to Sambit. Thank you. [APPLAUSE] SAMBIT SAMAL: All right. Thank you, Monica. What I’m going to
do is I’m going to talk about the future
of app development, some of the key trends that
at least we see and we hope that you see the same way. So a few things– so if you look at any
productivity platform, everybody provides the
standard mechanism, the same way of sending mail,
calendar, chats, writing docs, sheets, and things like that. But fundamentally, we see three
different market trends or tech trends which are going to
impact this productivity space in the next five to 10 years. So what are those three? The first thing that we see
is we have, now, capability to understand the user context. What do I mean by that? So everybody has
a mobile device. So at any point in time,
systems know where you are. And depending on where
you are, the experience can be customized. So that is the context– an example of the context. The second thing that the
systems are good at today is capturing the usage pattern. So what I mean by that
is how you do your work– the systems know how you
are doing that work. So things can be
customized as per that. For example, if you’re
always offlining something, the systems can know. And based on how and
when you are doing it, we can take actions on that. And the third thing
that happens is, when you go to a
new organization, the way to learn about that
particular organization is you go and ask people. The knowledge in
the organization is there in people’s heads. It’s sort of the
tribal knowledge. Wouldn’t it be better for you
to know in a systemic way? There are some people who
have tried this using sort of structured data analysis. But given the fact that
today we have this knowledge scattered across different
chat exchanges, different email exchanges, different docs, a
way to synthesize that knowledge will become important. And that’s what we’re
calling enterprise knowledge. Using these three,
you can potentially categorize the experiences
that are going to come into three broad categories. I’ve called these
assistive experience, knowledge visibility,
and process automation. Let’s look at each of these. So this will give you an idea
of what I’m talking about. So if you drive any new
car today, what you can see is there is blind spot
detection in most of the cars. What is that doing? It’s helping you drive better. It’s providing an assistive
capability on driving. You can see the same pattern
emerging in software. So if you look at a chat, and
the moment some chat comes in, it suggests to you some
option based on the context. And why does that help you? Especially on a
mobile device, it helps you give a response
which is relevant. So that is assisting
you in responding. You can see that
if you have used Gmail auto-compose– the
same kind of mechanism. The opportunity here is to bring
that to the developer platform so that you can use that
or the knowledge workers can use that to build
assistive experiences. The next thing I’m
going to talk about is this whole idea of
enterprise knowledge. Now, with the
structured data, you can go to your analytics
system and know, for example, who the best customer
is, and is he being spoken to by the best
customer service representative in your organization. Who is the expert in
a particular area? But with enterprise knowledge,
it will be possible for you to, without having any
structured analysis, know who is the expert
and who do we reach out to if we need some help, be
it usual things like 401(k) or anything of that sort. So think about it. When an average worker
spends 20% of the time– if you say that instead
of working for five days, you’re walking for
four days, that’s 20%. Or you can use that day
to do your 20% project. Whichever way you look at
it, that’ll help you do that. The third thing I’m going
to talk about is automation. This use case, all
of us go through. We want to have a discussion,
and we want to have a chat. And what happens
is, before we know, five or 10 email
chains get exchanged before we set up a meeting. The system recognizes that. So let’s suggest some
time slots by looking at your calendar
and your availability. And you click– just one click–
and the meeting is set up. Not only that, based
on conversation, maybe it can set up
the agenda, figure out which
documents are important, and attach them to
the Calendar invite. All those things will be
possible by automating processes and tasks. So that is the third
big trend you will see. Most of the productivity
improvement and the ensuing developer tools will
capture these three trends. Now to the final section. So what’s new in G Suite? I’m going to talk
about three things. So we are launching a
new Add-Ons platform. Add-Ons has been
there for a long time. But we are going to do
a new Add-Ons platform. What that will help you do is,
instead of writing an add-on for each of the G Suite
apps, you write it once, and it works across all
the different G Suite apps. It will have the user
context, and you can have that customized user context. It will make the
development easier, it will make the
management easier. It’s that uniform experience
across G Suite instead of per host app. The second thing that
we are announcing today is Alpha for data connectors. So what this means
is most of you, as you tried to move
your workload to cloud, you have this hybrid
scenario where you wanted the cloud to work
with your on-prem system. So with this Alpha,
what we are doing is we are integrating Sheets
with the relational data store you have in
your on-prem data center. This could be SQL Server,
this could be Oracle, this could be MySQL. So you can have all that
data come into Sheets and be used in
Sheets, and you can have that hybrid experience. The third thing that I’m going to
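Functionally, a connector like this hands rows from the on-prem database to the spreadsheet, which expects a header row plus a 2D array of cell values. Here is a rough sketch of that data hand-off in JavaScript– illustrative only, not the actual connector implementation:

```javascript
// Sketch: convert rows as returned by a typical SQL client
// (an array of objects) into the 2D values array a spreadsheet
// range expects: one header row, then one row per record.
function rowsToSheetValues(rows) {
  if (rows.length === 0) return [];
  const headers = Object.keys(rows[0]);
  const values = rows.map(row => headers.map(h => row[h] ?? ''));
  return [headers, ...values];
}
```

A real connector would also batch, refresh, and keep the sheet in sync with the database; this only shows the shape of the hand-off.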
talk about and announce today is what we’re calling the G Suite
Marketplace Security Assessment Program. The G Suite Marketplace, it
has more than 6,000 apps, as was talked about. It becomes very,
very challenging for people to know
which apps to rely on, which apps not to rely on, and
it’s a big challenge for admins. We have partnered with some of
the industry-leading security analysts. And the publisher
of these apps, they can go and have their
apps security assessed. And if they pass the test,
we’ll send them a badge. Then that becomes easy
for the administrator to facilitate an
[INAUDIBLE] buying process. So those are the
three announcements. With that, I’ll
end this session. But your feedback
is super important. It’s a gift for us. So please provide the
feedback, and that will help us improve the system. [MUSIC PLAYING]
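The meeting-scheduling automation described in this talk– proposing time slots from attendees’ calendars– is, at its core, an intersection of free time intervals. A minimal sketch of that idea in JavaScript (hypothetical code, not the actual G Suite implementation):

```javascript
// Sketch: given each attendee's busy intervals (minutes from
// midnight, half-open [start, end)), find gaps of at least
// `duration` minutes that are free for everyone in the window.
function freeSlots(busyCalendars, windowStart, windowEnd, duration) {
  // Merge everyone's busy time into one list, sorted by start.
  const busy = busyCalendars.flat().sort((a, b) => a.start - b.start);
  const slots = [];
  let cursor = windowStart;
  for (const b of busy) {
    if (b.start - cursor >= duration) {
      slots.push({ start: cursor, end: b.start });
    }
    cursor = Math.max(cursor, b.end);
  }
  if (windowEnd - cursor >= duration) {
    slots.push({ start: cursor, end: windowEnd });
  }
  return slots;
}
```

A scheduling assistant would then rank these gaps and offer the top few as one-click meeting options, and could attach the relevant documents to the resulting Calendar invite.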

Automating Visual Inspections in Energy and Manufacturing with AI (Cloud Next '19)

MANDEEP: My name is Mandeep, and I lead the industrial AI initiative for Google Cloud. Thank you so much for joining– really delighted to have you here. At Google, we believe that the goal of every technology should be to enrich our lives, to take our societies and our collective humanity forward, and to do so in a responsible manner. So we’re constantly thinking of ways in which technology, and particularly AI, can help us realize this bright and promising future. We’ve been thinking, how can we apply our advanced computer vision technology to solve some of the very hard incumbent problems in the industrial sectors, and how can we make these sectors more efficient and more sustainable? So in the next 50 minutes, we’ll be talking about how industrial inspection AI, powered by the AutoML Vision technology, can help make industrial inspections easier, faster, more accurate, and, more importantly, safer. And we’ll also look at how two leading companies are applying this technology to the energy and manufacturing sectors. So let’s get started.

AI holds great promise for solving real-world problems– from detecting glaucoma with retinal images, to processing millions or even billions of documents to understand their content, to automatically moderating unsafe and inappropriate content. We are applying this technology across all of these use cases. But we also recognize that developing this technology– building these custom vision models– is laborious, and it’s hard. So we wanted to enable even non-programmers to tap into the power of AI, and that is precisely why we created AutoML Vision. While our standard APIs are a great powerhouse of models pre-trained on massive Google image datasets, AutoML allows you to train custom models that are specific to your industry needs and your use case needs. How do we do that? In a very simple, clean UI, you upload labeled images if you’re looking at a classification problem, or you can draw bounding boxes, as we’ll see in a moment, to detect specific objects within those images. Once you’ve done that, with a click of a button you’ve got a model trained, and that model can be used to detect shark species, in this case, or to detect defects, anomalies, and breakage in your specific industrial products. We’re already seeing use cases with wind turbine degradation inspection, with outages on solar panel farms, or failures on electric poles, and we’ll be looking into some of these examples in more detail shortly.

At this point, I want to take a moment to talk about data protection and privacy. Your data sets and your images are your images. All of these custom-trained models are used only on your use cases, by you. Google does not pull these images into any common repositories or use them across customers. So– your data sets, your images. We’ll take a look at how this technology can be applied for aerial inspection of wind turbines, and then an application of that on the production line in a manufacturing company. But before I begin there, we want to share Google’s stance on the use of this technology. Google cares deeply that its technology is used to create a positive impact in the world, and to that end, Google created its AI Principles last June. They set the standard for the application of these AI technologies, and we abide by these principles for any work that involves AI. Similarly, for the use of this technology and for this product, we expect that this technology be applied in accordance with the AI Principles, which explicitly prohibit the use of this technology for any nefarious purposes. So we’ll now take a look at how one of the leading energy companies in the world is applying this technology to create a brighter and greener future for us all. Let’s take a look.

[VIDEO PLAYBACK]

- Wind turbine inspections– [INAUDIBLE] –we really do have the technology to address the issue of carbon footprint and greenhouse gases from the electric sector. AES Corporation is one of
the leaders in new technologies for renewables and energy storage. It’s a Fortune 500 company. Our mission is accelerating a safer and greener energy future. Right now, we have eight wind farms. Each farm has a different capacity, starting from 50 turbines up to 300 turbines. They cover large spans of geography and land– they’re spread across hilltops and mountainsides. All these turbines need annual inspections. Originally, it could take up to two weeks to do one inspection. We partnered with the leading drone service company Measure. Right now, with drones, we can do it in two days, and this is safe and quick.

- For a wind turbine inspection, we go out with our pilots, and what we’re looking for is cracks or defects– things that may need to be repaired. On a typical inspection, we’re coming back with 30,000 images and spending four weeks reviewing images. I don’t think anyone’s going to argue that that’s the best use of a highly trained engineer’s time. How do we speed that up, and how do we make it 10x more efficient? That’s where machine learning and AI come in. We’ve built a great end-to-end solution using Google Cloud tools and platform. With the AutoML Vision tool, we’ve trained it to detect damage. We’re able to eliminate approximately half of the images from needing human review. The remaining 50% of their time can now be very focused on identifying that damage and really determining the right course of action to remediate it. Moving from reviewing images to training machine learning models– it’s a much higher-order employment opportunity for people, and one we’re trying to develop on our team.

- Google Cloud has been a great partner. Their technology is consistently among the world leaders, and they’re just a great partner to work with, person to person. At the end of the day, we won’t reach the cleaner energy future without advanced tools like machine learning.

- Technology will allow renewable energy to be cheaper than conventional. Artificial intelligence, robotics– this is really where the future is all about.

[END PLAYBACK]

Please join me in welcoming Nick Osborne from AES.

NICK OSBORNE: Thank you, Mandeep, and thank you to the team that put that great video together. The power industry is enormous. It touches all of our lives, and the impacts are felt around the world. The industry investments are often quoted in the trillions of dollars. The opportunities for improvement are often in the billions, if not tens or even hundreds of billions, of dollars. The industry is also going through significant and profound change. Renewable energy is continuing to fall dramatically in price– solar, wind, and battery energy storage are not just possible but practical. The consumer is also driving change. They are much more aware of both the opportunities and the costs associated with their energy use. And the third megatrend is the new digital tools– cloud, AI, and many others– that are changing the economics of insight. I’m here today to share one story where we’ve partnered with Google to improve lives by accelerating a safer, cleaner energy future. We call this our vision for an aerial intelligence platform.

First, a little bit about myself and the company I work for. I’m Nick Osborne. I’m the business leader focused on understanding and applying advanced analytic tools like artificial intelligence and machine learning to applied business cases. The job’s really quite simple– I accelerate, coordinate, and facilitate the adoption of these new tools across the organization. AES is a global power company. We’re headquartered in Virginia, in the United States, but operate in 15 countries around the world. We’ve made a very significant commitment to reduce our carbon intensity by 70% by the year 2030. To help us achieve this, we’ve made some very significant investments in new technologies. We’re the world leader in battery energy storage using lithium-ion batteries, and we’re also the largest owner of solar assets in the United States. On a personal note, it feels good to come home at the end of the day and know I’m working with a company that’s putting its
money where its mouth is to drive that change. That is core to our mission. Applying new technologies is core to how we operate our business. Our drone program is considered world-leading in the energy industry. We developed this program by partnering with Measure. Measure is a professional drone services organization, and the Measure Ground Control software is an enterprise-caliber drone operations platform. Through this partnership, we’ve improved the cost, safety, and performance of our inspections. Another consideration– we often hear about the threat of technology taking jobs or eliminating jobs. That’s clearly not the case with what we’re seeing in our drone program and many other technologies that we’re exploring. We now have over 170 pilots trained in our organization, performing operations in over 100 locations around the world. These are employees with tremendous value for our company, for their personal advancement, and their broader career growth.

Prior to drones, these inspections were typically done manually. So it was either someone climbing up the turbine and then rappelling down to inspect the blade, or hiking around the turbine with a large telephoto lens, trying to capture an angle and see if they could detect damage. Neither of these was as effective, or as efficient, or as safe as what we’re able to do with drones. Using drones, we’re now able to take that partial inspection that was taking two weeks of time and do a full inspection in two days– at a much lower cost, with much higher quality, and in a much safer manner. A tremendous improvement in efficiency and velocity for our organization. But there was one new workflow. When we do a single turbine inspection, a single turbine has around 300 images. When we do an entire field, this means we’re coming back with 30,000 or even 60,000 images. This takes a lot of meticulous and detailed review to complete the inspection work. So we saw this as a great opportunity for artificial intelligence, and this is really where our partnership with Google started to grow.

To understand our journey towards AI, you need to understand where we started. We started with an investment in talent. We sent two classes of six people to Google’s Advanced Solutions Lab for intense training in supervised machine learning. This cohort became the foundation for our work in AI. Internally, we refer to this decision as a no-regrets decision, meaning that we were able to quickly move forward and make this investment with little or no hesitation on our part. A few keys for ROI– one is, don’t just send IT people to this training. A lot of the value from data science in general, and this program in particular, comes from the mixture of expertise and ideas that you get when you send multiple types of people through the program. The second piece of advice– and this is maybe a bit selfish on my part– is make sure you have a good commitment to work on your projects after this training. We only sent high-performing individuals to the training, and the risk with sending high-performing individuals is that they’re going to get quickly pulled back into their day job. That’s definitely something we had to work through as an organization. So this investment set the groundwork to accelerate the progress that we were making as a company, and it’s another example of where new technologies are increasing opportunities for our employees.

From this foundation, we got to work. We went through a proof-pilot-production process, with each step being a stage gate for further investment. Starting with our proof, we built a custom TensorFlow model leveraging the openly available Inception v3 vision model. And it worked– we were able to detect damage. But it also showed us where our shortcomings were. Our data needed work, setting up the end-to-end platform was going to be difficult, and we were going to need some help. So in speaking with Google about our progress and our learnings, we discussed
In the pilot phase, we used Google's data labeling service and Google's AutoML Vision tool to really accelerate our efforts and boost our efficiency. And again, it worked. False negatives were seen as a key business risk for our organization, so not detecting damage is something we weren't willing to accept in our inspection process. Using our most restrictive precision-recall metrics during this pilot phase, we were able to show that we could eliminate 30% of the images from needing any human review. So that four-week review process was now down to three weeks, really accelerating our velocity and our time to action. Time to action has really become one of the key metrics we look at with this project. This gave us the commitment and the ability to move forward with our production environment. Our production environment is a scalable platform for us to label images, train new models, and manage those models in production. We're still iterating and refining the model, but we're again showing some very promising results. We're now showing that we can eliminate 50% of the images from needing any human review, and the remaining 50% of the images are categorized and classified by type of damage, further improving our time to action and focusing our engineers on the most important and most critical types of damage. Going back to data: one of the things we learned early on was that we had a lot of data that was not at the quality or level of consistency we needed for machine learning. Working with Measure, we developed a nine-category classification of damage. This includes things like cracks, gel coat damage, and different types of delamination and splitting, as well as some non-damage categories like serial numbers, lightning protection points, stickers, and whatnot. We also worked with Google's data labeling team to iterate through many, many edge cases of the different types of damage that are out there.
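The "most restrictive precision-recall" operating point described above amounts to choosing the highest auto-clear confidence threshold that still produces zero false negatives on a validation set. A minimal sketch under that interpretation (the function and data are hypothetical, not the team's actual pipeline):

```python
def zero_fn_threshold(no_damage_scores, is_damaged):
    """Largest 'no damage' confidence cutoff that never auto-clears a
    truly damaged image, plus the fraction of images it removes from
    human review."""
    # An image is auto-cleared when its score strictly exceeds the cutoff,
    # so the cutoff must be at least the highest score of any damaged image.
    damaged = [s for s, d in zip(no_damage_scores, is_damaged) if d]
    cutoff = max(damaged) if damaged else 0.0
    cleared = sum(1 for s, d in zip(no_damage_scores, is_damaged)
                  if not d and s > cutoff)
    return cutoff, cleared / len(no_damage_scores)

# Example: scores [0.95, 0.9, 0.2, 0.8, 0.1] with damage flags
# [0, 0, 1, 0, 1] give cutoff 0.2 and clear 3 of 5 images (60%)
# without any damaged image being missed.
```

In practice the cutoff would be picked on a held-out validation set and re-checked as the model is retrained, since the zero-false-negative guarantee only holds for the data it was measured on.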
We started with a series of small batches, doing a full and complete review of all the labels that were coming back, but as the quality of the labeling improved and our batch sizes grew, we moved toward a sampling basis. We also needed to develop a platform to manage the labeling effort and the model training and prediction process. Working with Google, we identified ClearObject, a local GCP partner, to help us architect and develop our platform using the latest thinking in cloud and serverless tools available from Google. ClearObject has been a great partner and worked quickly to develop this platform for us. The platform leverages AutoML as our core modeling engine, Cloud Storage and Cloud SQL for our image repository and metadata, and Cloud Functions and App Engine to manage our interactions and orchestrations. Now that we have this platform, we're continuing to improve the model, and we're also looking to expand its use. We're looking at new business cases, solar, transmission infrastructure, and even safety, as well as new inspection modalities, for example infrared and even lidar. We're also looking at pushing the model to the edge, or in this case, the drone. So I'm really excited to hear what LG is going to be sharing next. Energy is a trillion-dollar business. It impacts lives every day, in every country around the world. The challenge and the real-world impact are huge. If you're interested in working with, or for, a company that is improving lives by accelerating a safer, cleaner energy future, please come talk to me. Mandeep? [APPLAUSE] MANDEEP: Thank you very much for that great presentation. So we saw how AutoML Vision can be used for visual inspections to make them easier, faster, more accurate, and safer. In speaking with a lot of experts from the industry, we learned that there are some specific requirements for manufacturing use cases.
A lot of the time, this data sits on premise, there are latency requirements, and most of the image data sets are in a format that requires them to be processed on edge devices, whether that's a mobile phone, an Edge TPU, a CPU, or a GPU. So with our AutoML Vision Edge solution, you're able to take your custom-trained models, download them onto an edge device, and run inference from your edge devices. I think you'd much rather see that in action and hear directly from a manufacturing company that has deployed these models on the production line. So it's a great pleasure for me to invite Mr. Lee from LG to share more about this initiative. Mr. Lee? MR. LEE: Good afternoon, everyone. Thank you very much for your attention to the previous presentation. I'm the Vice President of the AI and Big Data Business Unit at LG CNS. It seems there are many AI specialists in our audience today. If you are like me, I expect we share many great hopes of applying AI to real-world solutions, and I hope this short overview of our collaboration with Google AutoML will help you all in your AI work. Today, we'll be looking at how LG CNS and Google have successfully collaborated on AI image recognition technologies, and how we have applied our solution to vision inspection systems and several manufacturing solutions. Let's begin with a little background on LG CNS. I think you may know the name of the LG Group, but you may not know what kinds of companies are in the LG Group, so I want to introduce some of them. We have LG Electronics, which produces televisions and refrigerators. We have LG Display, which produces world-leading OLED panels. LG Innotek produces camera modules, and I think half of you already have an LG Innotek camera in your cell phone, sorry, smartphone. And LG Chem produces electric batteries and is another world-leading company in the LG Group. So you may know that almost all of the LG Group companies work in the manufacturing industry.
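Stepping back to the Edge workflow Mandeep described (train in the cloud, download the model, run inference on the device): on most devices that downloaded model is a TensorFlow Lite file driven by the TFLite interpreter. A toy sketch of that on-device step, using a tiny stand-in Keras model rather than a real AutoML Vision Edge export:

```python
import numpy as np
import tensorflow as tf

# Stand-in for a downloaded AutoML Vision Edge export: a tiny classifier
# converted to TensorFlow Lite flatbuffer bytes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# On-device inference: the same Interpreter API runs on phones, CPUs,
# GPUs, and Edge TPUs (via the appropriate delegate).
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])  # class probabilities, shape (1, 3)
```

On a real line, the input tensor would be the preprocessed camera frame and the output classes would be the defect categories the model was trained on.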
As LG CNS supplies IT solutions for the LG Group affiliates and other companies working in the manufacturing industry, we are constantly working on how best to apply AI technology to improve manufacturing processes. And we all know it can be a real challenge to use big data and AI technology to ensure product quality in large-scale production. This is where our discussion of Google AutoML comes in today. LG CNS started working with the Google team in the summer of last year. We began our collaboration after seeing what Google was achieving in image recognition technology, because we thought Google AutoML could help improve vision inspection for LG production processes. And to our great satisfaction, our collaboration has been a success. Before we worked with Google AutoML, we had actually already developed our own in-house AI system. For those of you familiar with manufacturing processes, you will recognize that the picture on the left of the screen is the typical visual inspection system that relies on human operators. Many production lines have cameras, IoT sensors, and other detection technologies, but it is still hard to find real defects efficiently. Sometimes non-defective products are misjudged as defective because of minor factors like small dust particles or low-resolution images, so it often remains more effective to rely on people to complete visual inspections. But while people get better at it, the monotony of visual inspection also leads workers to make many errors. To solve this problem, LG CNS made a transition: we moved from the traditional visual inspection shown in the left image to the AI inspection system shown on the right. I'm sure many of you are also working on inspection technologies, so you will be familiar with the trial and error we went through to improve our system
with artificial intelligence. Anyway, with our in-house system, we increased accuracy and performance and even improved our process speed and efficiency, which means we could reduce our operation costs. With our in-house AI system, we were able to apply AI to over three production lines in the LG Group alone. Some examples: in the first picture, we could improve defect detection in LCD and OLED panels; in the middle picture, we could remove impurities from optical film; and we were even improving quality control for production. So many improvements could be made with our in-house AI system. But even with these improvements, our system wasn't working optimally, because it still required a lot of time and effort to perform well. Now I will talk a little about the downsides of this system. As is often the case with success, we also ran into some obstacles as we expanded the application of our AI vision inspection into other areas. We experienced a shortage of skilled AI developers. It is very hard to hire good AI developers for a company located in South Korea, so it is a very hard time when an AI developer leaves our company; the impact on us is very big. And when we design an AI model, we need to spend a lot of time and effort to achieve high performance. Additionally, as we developed models using servers located at the production sites, the complexity of the architecture increased, so it was hard to maintain. We now require a process to centrally design models, distribute them to the edge, and centrally control the performance of the deployed models in one integrated system. Collaboration with Google has been critical to finding a solution to these problems. The performance of Google AutoML has been truly exciting, even if our AI specialists don't like to hear it. One of the key areas we needed to improve in our system was our productivity in terms of
model development time. As you can see in the diagram on the left, the top bar shows it took roughly seven days to complete a model before using AutoML, but afterwards we brought that down to a mere two hours with Google AutoML. The other area we needed to improve, in addition to speed, was the accuracy of our system. As the diagram on the right shows, Google AutoML's performance exceeded that of our AI experts many times. Our test results showed an average 6% improvement in performance that we can expect when using Google AutoML. While we have made advances using Google AutoML and integrating it with our visual inspection, we are still facing several challenges. In many cases, we could not meet our clients' requirements, and we found that many of the issues come from low image quality, not from the models we built with Google. To solve this problem, we decided to launch an image pre-processing team. The members of this team spend more time on exploratory data analysis and pre-processing data, and try hard to augment data to get better machine learning models. They have also come to spend a lot of time thinking about how to change the inspection process itself. I estimate that our members can now use their time and effort for more strategic work. We are now planning to expand our business into consulting services, so we will provide expertise to enhance overall inspection processes as a one-stop solution. We are hopeful that we will see the first manufacturing visual inspection area where humans and AI share the work optimally. Do you agree? OK. I would like to announce that we have built an integrated AI vision inspection architecture, so our system and Google AutoML are connected seamlessly. With this architecture, we will be able to maximize human capability and our utilization of Google AutoML. This architecture starts with the data scientists at the bottom.
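Data augmentation of the kind Mr. Lee's pre-processing team performs can start from simple geometric transforms that multiply a small defect dataset. An illustrative NumPy sketch, not LG CNS's actual pipeline:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return simple geometric variants of an inspection image.
    These are label-safe only for defect classes whose identity does
    not depend on orientation."""
    return [
        image,
        np.fliplr(image),  # horizontal flip
        np.flipud(image),  # vertical flip
        np.rot90(image),   # 90-degree rotation (swaps height and width)
    ]
```

Each labeled image yields four training examples; richer pipelines would add crops, brightness jitter, and noise tuned to the camera setup on the line.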
The data scientists ensure image quality, so they produce clear images and send them to Google AutoML. Google AutoML takes the clear images and produces AI models efficiently and effectively. Finally, the models are fully managed, with all of the history data, performance status, and automated learning processes. With this architecture, LG CNS can now develop and manage thousands of AI models simultaneously. In addition to vision inspection, our goal is to expand the architecture to other manufacturing use cases, to manage whole factory equipment, facilities, safety, and so on. I think you can imagine how many use cases we can expand this integrated architecture to in the manufacturing industry. To this point, we have gone over how collaboration with Google AutoML has improved our visual inspection systems. Now let's look to the future. Based on our AI integration success within the LG Group, we will keep working to be positioned as a leading AI visual inspection total service provider. We will cover the pre-processing area, we will cover training the models, and we will manage all of the models built with Google AutoML, whether the cause of poor inspection quality is the machine learning element, the image quality, the data labeling, or the operators themselves. Working with Google AutoML, we will strive to achieve our goal of 99.9% accuracy and a leak rate of 0.001% under all conditions. If you are experiencing similar issues in your industry, I hope this session has been helpful. I really appreciate your attention, and thank you for listening. [APPLAUSE] MANDEEP: Thank you, Mr. Lee. So the goal that Mr.
Lee shared about LG is very much what we share for our product and for our roadmap as well, which is to make our inferences faster, our interfaces more intuitive and easier, and our results more accurate. Within manufacturing, we're seeing many more use cases, beyond automotive and beyond electronics, into food, into retail, and many more categories, and we are very excited to work on these new use cases with you. We saw how AI and visual inspection can be applied to manufacturing use cases, and we looked at how this can be applied to aerial inspection use cases. Beyond the three use cases we talked about on the aerial inspection side, we are also exploring more work on agriculture monitoring and construction site monitoring. As of today, this technology is available to use in beta. Please visit /vision to register your interest. You can use the technology right away, but by registering at this site, we are able to partner with you and work with you on our upcoming releases and our early access program. So we look forward to hearing from you. Thank you so much for joining us in this shared vision, and we really look forward to working with you in creating a brighter, greener, and more positive future. Thank you very much, all. [APPLAUSE]