How Does a Transistor Work?

In this phone, there are nearly 100 million
transistors; in this computer, there are over a billion. The transistor is in virtually
every electronic device we use: TVs, radios, Tamagotchis. But how does it work? Well, the basic principle is actually incredibly
simple. It works just like this switch, so it controls the flow of electric current. It can be off, so you could call that the
zero state or it could be on, the one state. And this is how all of our information is
now stored and processed, in zeros and ones, little bits of electric current. But unlike
this switch, a transistor doesn’t have any moving parts. And it also doesn’t require
a human controller. Furthermore, it can be switched on and off much more quickly than
I can flick this switch. And finally, and most importantly, it is incredibly tiny. Well
this is all thanks to the miracle of semiconductors or rather I should say the science of semiconductors. Pure silicon is a semiconductor, which means
it conducts electric current better than insulators but not as well as metals.
This is because an atom of silicon has four electrons in its outermost or valence shell.
This allows it to form bonds with its four nearest neighbours. Hidey ho there!
G’day! Wasaaaaap!? So it forms a tetrahedral crystal. But since all these electrons are stuck in
bonds, few ever get enough energy to escape their bonds and travel through the lattice.
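To put a number on “few”: with textbook values for silicon at room temperature (these figures are my addition, not from the video), only about one atom in five trillion contributes a mobile electron.

```python
# Rough, textbook numbers for pure silicon at room temperature (assumptions,
# not from the video).
atoms_per_cm3 = 5.0e22               # silicon atoms per cubic centimetre
mobile_electrons_per_cm3 = 1.0e10    # intrinsic carriers, order of magnitude at ~300 K

fraction_mobile = mobile_electrons_per_cm3 / atoms_per_cm3
print(fraction_mobile)   # about 2e-13: roughly 1 atom in 5 trillion contributes a carrier
```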
So having a small number of mobile charges is what makes silicon a semiconductor. Now this wouldn’t be all that useful without
a semiconductor’s secret weapon — doping. You’ve probably heard of doping, it’s when
you inject a foreign substance in order to improve performance. Yeah it’s actually just like that, except
on the atomic level. There are two types of doping called n-type
and p-type. To make an n-type semiconductor, you take pure silicon and inject a small amount
of an element with five valence electrons, like phosphorus. This is useful because phosphorus is similar
enough to silicon that it can fit into the lattice, but it brings with it an extra electron.
So this means now the semiconductor has more mobile charges and so it conducts current
better. In p-type doping, an element with only three
valence electrons, like boron, is added to the lattice. Now this creates a ‘hole’ – a
place where there should be an electron, but there isn’t.
But this still increases the conductivity of the silicon because electrons can move
into it. Now although it is electrons that are moving,
we like to talk about the holes moving around — because there are far fewer of them. Now
since the hole is the lack of an electron, it actually acts as a positive charge. And
this is why p-type semiconductor is actually called p-type. The p stands for positive – it’s
positive charges, these holes, which are moving and conducting the current. Now it’s a common misconception that n-type
semiconductors are negatively charged and p-type semiconductors are positively charged.
That’s not true: they are both neutral because they have the same number of electrons and
protons inside them. The n and the p actually just refer to the
sign of charge that can move within them. So in n-type, it’s negative electrons which
can move, and in p-type it’s a positive hole that moves.
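A toy bit of bookkeeping, with illustrative numbers of my own choosing, makes the neutrality point concrete:

```python
# Toy charge bookkeeping for an n-type crystal: doping adds mobile electrons
# but never unbalances protons and electrons. Numbers are illustrative only.

SI_PROTONS = 14      # a neutral silicon atom: 14 protons, 14 electrons
P_PROTONS = 15       # phosphorus: one more proton AND one more electron

n_silicon = 1_000_000     # atoms in our toy lattice
n_phosphorus = 100        # dopant atoms swapped into lattice sites

protons = (n_silicon - n_phosphorus) * SI_PROTONS + n_phosphorus * P_PROTONS
electrons = protons       # every atom arrived neutral, so the tallies match

net_charge = protons - electrons
print(net_charge)        # 0: the doped crystal is still neutral overall
print(n_phosphorus)      # but it now has 100 extra *mobile* electrons
```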
But they’re both neutral! A transistor is made with both n-type and
p-type semiconductors. A common configuration has n on the ends with p in the middle. Just
like a switch a transistor has an electrical contact at each end and these are called the
source and the drain. But instead of a mechanical switch, there is a third electrical contact
called the gate, which is insulated from the semiconductor by an oxide layer. When a transistor is made, the n and p-types
don’t keep to themselves — electrons actually diffuse from the n-type, where there are more
of them, into the p-type to fill the holes. This creates something called the depletion
layer. What’s been depleted? Charges that can move.
There are no more free electrons in the n-type — why? Because they’ve filled the holes in
the p-type. Now this makes the p-type negative thanks
to the added electrons. And this is important because the p-type will now repel any electrons
that try to come across from the n-type. So the depletion layer actually acts as a
barrier, preventing the flow of electric current through the transistor. So right now the transistor
is off, it’s like an open switch, it’s in the zero state. To turn it on, you have to apply a small positive
voltage to the gate. This attracts the electrons over and overcomes that repulsion from the
depletion layer. It actually shrinks that layer so that electrons can move through and
form a conducting channel. So the transistor is now on, it’s in the one
state. This is remarkable because just by exploiting
the properties of a crystal we’ve been able to create a switch that doesn’t have any moving
parts, that can be turned on and off very quickly just with a voltage, and most importantly
it can be made tiny. Transistors today are only about 22nm wide,
which means they are only about 50 atoms across. But to keep up with Moore’s law, they’re going
to have to keep getting smaller. Moore’s Law states that every two years the number of
transistors on a chip should double. And there is a limit, as those terminals get
closer and closer together, quantum effects become more significant and electrons can
actually tunnel from one side to the other. So you may not be able to make a barrier high
enough to stop them from flowing. Now this will be a real problem for the future
of transistors, but we’ll probably only face that another ten years down the track. So
until then transistors, the way we know them, are going to keep getting better.

How Google Apps Work

>>LEUNG: I’m Vivian Leung and I work at Google.
Today, we’re here to talk about Google Apps, which, if you’re not familiar with it, is
our online suite of communication and collaboration tools. I think here at Google, we’ve been
looking at ways people work with each other, communicate with each other. And we’ve found
that most of the time, we’re pretty attached to our own computers or hard drives. And being
an Internet company, we figured there must be a better way. And one of these ways is
what we call Cloud Computing. I know it sounds a little bit geeky, but it’s really
not that complicated. Basically, all of your files and all of your information, even all
of these programs that you use are all stored, you know, online or in the cloud, as we call
it. That means you’re not attached to any single computer or hard drive. All you have
to do is be connected to the Internet and you can access these programs and files from
anywhere in the world. Let’s use an example. So, I’m in New York and my coworker is in
San Francisco and we need to work on a presentation together. In the old way, we would’ve created
a presentation, emailed it as an attachment and sent it back and forth, back and forth, until
we were done. But with Google Docs for example, I can create a presentation online, share
it with my colleague, and we can actually both edit it and make changes at the same
time. And we’re working on the same actual doc, so there are no version-control headaches and there are
no attachments. There’s only one copy of any file that you work on. That means you can
be accessing your information and working with other people from any computer. In fact,
you can actually connect to it from any device connected to the Internet, so, smartphones,
netbooks, laptops, you name it. And there’s nothing to download or install, you just get
online and off you go. I hope this video has been helpful, if you have any ideas on how
we can make Apps better, please let us know at this link. Thanks and happy Cloud Computing.

Neuromorphic Computing Is a Big Deal for A.I., But What Is It?

We often talk about how traditional computing
is reaching its limit–there’s a threshold we can’t move past without making some seriously
big changes to the way we structure computers. One of those exciting ways is by making physical
computers a little more like human brains. We introduced this concept in more detail
here, but a quick recap: this kind of computing is called neuromorphic computing, which means
designing and engineering computer chips that use the same physics of computation used by
our own nervous system. This is different from an artificial neural
network, which is a program run on a normal computer that mimics the logic of how a human
brain thinks. Neuromorphic computing (the hardware version)
and neural networks (the software version) can work together because as we make progress
in both fields, neuromorphic hardware will probably be the best option to run neural
networks on…but for this video, we’re going to focus on neuromorphic computing and
the really exciting strides that have been made in this field in the past year. See, traditional computers ‘think’ in
binary. Everything is either a 1 or 0, a yes or a
no. You only have two options, so the code we
use and the questions we ask these kinds of computers must be structured in a very rigid
way. Neuromorphic computing works a little more
flexibly. Instead of using an electric signal to mean
one or zero, designers of these new chips want to make their computer’s neurons talk
to each other the way biological neurons do. To do this, you need a kind of precise electric
current which flows across a synapse, or the space between neurons. Depending on the number and kind of ion, the
receiving computer neuron is activated in some way–giving you a lot more computational
options than just your basic yes and no. This ability to transmit a gradient of understanding
from neuron to neuron and to have them all working together simultaneously means that
neuromorphic chips could eventually be more energy efficient than our normal computers–especially
for really complicated tasks. To realize this exciting potential, we need
new materials because what we’re using in our computers today isn’t gonna cut it. The physical properties of something like
silicon, for example, make it hard to control the current between artificial neurons…it
just kind of bleeds all over the chip with no organization. So a new design from an MIT team uses different
materials, single-crystalline silicon and silicon germanium, layered on top of one another. Apply an electric field to this new device? You get a well-controlled flow of ions. A team in Korea is investigating other materials. They used tantalum oxide to give them precise
control over the flow of ions… AND it’s even more durable. Another team in Colorado
is implementing magnets to precisely control the way the computer neurons communicate. These advances in the actual architecture
of neuromorphic systems are all working toward getting us to a place where the neurons on
these chips can ‘learn’ as they compute. Software neural networks have been able to
do this for a while, but it’s a new advancement for physical neuromorphic devices–and these
experiments are showing promising results. Another leap in performance has been made
by a team at the University of Manchester, who have taken a different approach. Their system is called SpiNNaker, which stands
for Spiking Neural Network Architecture. While other experiments look to change the
materials we use, the Manchester team uses traditional digital parts, like cores and
routers–connecting and communicating with each other in innovative ways. UK researchers have shown that they can use
SpiNNaker to simulate the behavior of the human cortex. The hope is that a computer that behaves like
a brain will give us enough computing power to simulate something as complicated as the
brain, helping us understand diseases like Alzheimer’s. The news is that SpiNNaker has now matched
the results we’d get from a traditional supercomputer. This is huge because neural networks offer
the possibility of higher speed and more complexity for less energy cost, and with this new finding
we see that they’re edging closer to the best performance we’ve been able to achieve
so far. Overall, we’re working toward having a better
understanding of how the brain works in the first place, improving the artificial materials
we use to mimic biological systems, and creating hardware architectures that work with and
optimize neural algorithms. Changing computer hardware to behave more
like the human brain is one of a few options we have for continuing to improve computer
performance, and to get computers to learn and adapt the way humans do. While scientists make computers that work
like brains, put your brain to use by building your very own website! Domain dot com is awesome, affordable, reliable,
and has all the tools you need to build a new website. They can fulfill all your website needs. They offer dot com and dot net domain names,
and intuitive website builders. They have over three hundred domain extensions
to fit your needs, from dot club to dot space, dot pizza! Take that first step in creating an identity
online and visit domain dot com. It looks like it’s gonna be a wild ride
ahead, you guys. I think you should probably subscribe to Seeker
so you can always know when something new and exciting happens as we progress along
this brain-mimicking path, and for even more on this subject, may I suggest you check out
this video on neural networks? Thanks for watching.
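As a footnote, the “spiking” behaviour that SpiNNaker simulates is usually introduced through the textbook leaky integrate-and-fire model. This sketch uses made-up parameters and is not the model any particular chip implements:

```python
# A minimal leaky integrate-and-fire neuron: the textbook model of the kind of
# "spiking" behaviour SpiNNaker simulates. Parameters are illustrative only.

def simulate_lif(input_current, steps=1000, dt=0.1,
                 tau=10.0, v_rest=0.0, v_threshold=1.0):
    """Count spikes fired over `steps` time steps for a constant input."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # Membrane potential leaks back toward rest while the input charges it.
        v += (-(v - v_rest) + input_current) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: fire, then reset
            spikes += 1
            v = v_rest
    return spikes

print(simulate_lif(0.5))   # weak input: potential settles below threshold, 0 spikes
print(simulate_lif(2.0))   # strong input: a steady train of spikes
```

Weak input lets the potential settle below threshold, so nothing fires; strong input produces a regular spike train. That graded, timing-based response is the contrast with a plain binary one-or-zero signal.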

Cybersecurity: Crash Course Computer Science #31

Hi, I’m Carrie Anne, and welcome to CrashCourse
Computer Science! Over the last three episodes, we’ve talked
about how computers have become interconnected, allowing us to communicate near-instantly
across the globe. But, not everyone who uses these networks
is going to play by the rules, or have our best interests at heart. Just as how we have physical security like
locks, fences and police officers to minimize crime in the real world, we need cybersecurity
to minimize crime and harm in the virtual world. Computers don’t have ethics. Give them a formally specified problem and
they’ll happily pump out an answer at lightning speed. Running code that takes down a hospital’s
computer systems until a ransom is paid is no different to a computer than code that
keeps a patient’s heart beating. Like the Force, computers can be pulled to
the light side or the dark side. Cybersecurity is like the Jedi Order, trying
to bring peace and justice to the cyber-verse. The scope of cybersecurity evolves as fast
as the capabilities of computing, but we can think of it as a set of techniques to protect
the secrecy, integrity and availability of computer systems and data against threats. Let’s unpack those three goals: Secrecy, or confidentiality, means that only
authorized people should be able to access or read specific computer systems and data. Data breaches, where hackers reveal people’s
credit card information, are attacks on secrecy. Integrity means that only authorized people
should have the ability to use or modify systems and data. A hacker who learns your password and sends e-mails
masquerading as you is carrying out an integrity attack. And availability means that authorized people
should always have access to their systems and data. Think of Denial of Service Attacks, where
hackers overload a website with fake requests to make it slow or unreachable for others. That’s attacking the service’s availability. To achieve these three general goals, security
experts start with a specification of who your “enemy” is, at an abstract level,
called a threat model. This profiles attackers: their capabilities,
goals, and probable means of attack – what’s called, awesomely enough, an attack vector. Threat models let you prepare against specific
threats, rather than being overwhelmed by all the ways hackers could get to your systems
and data. And there are many, many ways. Let’s say you want to “secure” physical
access to your laptop. Your threat model is a nosy roommate. To preserve the secrecy, integrity and availability
of your laptop, you could keep it hidden in your dirty laundry hamper. But, if your threat model is a mischievous
younger sibling who knows your hiding spots, then you’ll need to do more: maybe lock
it in a safe. In other words, how a system is secured depends
heavily on who it’s being secured against. Of course, threat models are typically a bit
more formally defined than just “nosy roommate”. Often you’ll see threat models specified
in terms of technical capabilities. For example, “someone who has physical access
to your laptop along with unlimited time”. With a given threat model, security architects
need to come up with a solution that keeps a system secure – as long as certain assumptions
are met, like no one reveals their password to the attacker. There are many methods for protecting computer
systems, networks and data. A lot of security boils down to two questions: who are you, and what should you have access to? Clearly, access should be given to the right
people, but refused to the wrong people. Like, bank employees should be able to open
ATMs to restock them, but not me… because I’d take it all… all of it! That ceramic cat collection doesn’t buy
itself! So, to differentiate between right and wrong
people, we use authentication – the process by which a computer understands who it’s
interacting with. Generally, there are three types, each with
their own pros and cons: What you know. What you have. And what you are. What you know authentication is based on knowledge
of a secret that should be known only by the real user and the computer, for example, a
username and password. This is the most widely used today because
it’s the easiest to implement. But, it can be compromised if hackers guess
or otherwise come to know your secret. Some passwords are easy for humans to figure
out, like 123456 or q-w-e-r-t-y. But, there are also ones that are easy for
computers. Consider the PIN: 2580. This seems pretty difficult to guess – and
it is – for a human. But there are only ten thousand possible combinations
of 4-digit PINs. A computer can try entering 0000, then try
0001, and then 0002, all the way up to 9999… in a fraction of a second. This is called a brute force attack, because
it just tries everything. There’s nothing clever to the algorithm. Some computer systems lock you out, or have
you wait a little, after say three wrong attempts. That’s a common and reasonable strategy,
and it does make it harder for less sophisticated attackers. But think about what happens if hackers have
already taken over tens of thousands of computers, forming a botnet. Using all these computers, the same PIN – 2580
– can be tried on many tens of thousands of bank accounts simultaneously. Even with just a single attempt per account,
they’ll very likely get into one or more that just happen to use that PIN. In fact, we’ve probably guessed the PIN
of someone watching this video! Increasing the length of PINs and passwords
can help, but even 8 digit PINs are pretty easily cracked. This is why so many websites now require you
to use a mix of upper and lowercase letters, special symbols, and so on – it explodes
the number of possible password combinations. An 8-digit numerical PIN only has a hundred
million combinations – computers eat that for breakfast! But an 8-character password with all those
funky things mixed in has more than 600 trillion combinations. Of course, these passwords are hard for us
mere humans to remember, so a better approach is for websites to let us pick something more
memorable, like three words joined together: “green brothers rock” or “pizza tasty
yum”. English has around 100,000 words in use, so
putting three together would give you roughly 1 quadrillion possible passwords. Good luck trying to guess that! I should also note here that using non-dictionary
words is even better against more sophisticated kinds of attacks, but we don’t have time
to get into that here. Computerphile has a great video on choosing
a password – link in the dooblydoo. What you have authentication, on the other
hand, is based on possession of a secret token that only the real user has. An example is a physical key and lock. You can only unlock the door if you have the
key. This escapes the problem of being “guessable”. And they typically require physical presence,
so it’s much harder for remote attackers to gain access. Someone in another country can’t gain access
to your front door in Florida without getting to Florida first. But, what you have authentication can be compromised
if an attacker is physically close. Keys can be copied, smartphones stolen, and
locks picked. Finally, what you are authentication is based
on… you! You authenticate by presenting yourself to
the computer. Biometric authenticators, like fingerprint
readers and iris scanners, are classic examples. These can be very secure, but the best technologies
are still quite expensive. Furthermore, data from sensors varies over
time. What you know and what you have authentication
have the nice property of being deterministic – either correct or incorrect. If you know the secret, or have the key, you’re
granted access 100% of the time. If you don’t, you get access zero percent
of the time. Biometric authentication, however, is probabilistic. There’s some chance the system won’t recognize you… maybe you’re wearing a hat or the lighting
is bad. Worse, there’s some chance the system will
recognize the wrong person as you – like your evil twin! Of course, in production systems, these chances
are low, but not zero. Another issue with biometric authentication
is it can’t be reset. You only have so many fingers, so what happens if an attacker compromises your fingerprint data? This could be a big problem for life. And, recently, researchers showed it’s possible
to forge your iris just by capturing a photo of you, so that’s not promising either. Basically, all forms of authentication have
strengths and weaknesses, and all can be compromised in one way or another. So, security experts suggest using two or
more forms of authentication for important accounts. This is known as two-factor or multi-factor
authentication. An attacker may be able to guess your password
or steal your phone: but it’s much harder to do both. After authentication comes Access Control. Once a system knows who you are, it needs
to know what you should be able to access, and for that there’s a specification of
who should be able to see, modify and use what. This is done through Permissions or Access
Control Lists (ACLs), which describe what access each user has for every file, folder and program
on a computer. “Read” permission allows a user to see
the contents of a file, “write” permission allows a user to modify the contents, and
“execute” permission allows a user to run a file, like a program. For organizations with users at different
levels of access privilege – like a spy agency – it’s especially important for
Access Control Lists to be configured correctly to ensure secrecy, integrity and availability. Let’s say we have three levels of access:
public, secret and top secret. The first general rule of thumb is that people
shouldn’t be able to “read up”. If a user is only cleared to read secret files,
they shouldn’t be able to read top secret files, but should be able to access secret
and public ones. The second general rule of thumb is that people
shouldn’t be able to “write down”. If a member has top secret clearance, then
they should be able to write or modify top secret files, but not secret or public files. It may seem weird that even with the highest clearance, you can’t modify less secret files. But, it guarantees that there’s no accidental
leakage of top secret information into secret or public files. This “no read up, no write down” approach
is called the Bell-LaPadula model. It was formulated for the U.S. Department
of Defense’s Multi-Level Security policy. There are many other models for access control
– like the Chinese Wall model and Biba model. Which model is best depends on your use-case. Authentication and access control help a computer
determine who you are and what you should access, but depend on being able to trust
the hardware and software that run the authentication and access control programs. That’s a big dependence. If an attacker installs malicious software
– called malware – that compromises the host computer’s operating system, how can we
be sure security programs don’t have a backdoor that lets attackers in? The short answer is… we can’t. We still have no way to guarantee the security
of a program or computing system. That’s because even while security software
might be “secure” in theory, implementation bugs can still result in vulnerabilities. But, we do have techniques to reduce the likelihood
of bugs, quickly find and patch bugs when they do occur, and mitigate damage when a
program is compromised. Most security errors come from implementation
error. To reduce implementation error, reduce implementation. One of the holy grails of system level security
is a “security kernel” or a “trusted computing base”: a minimal set of operating system software that’s close to provably secure. A challenge in constructing these security
kernels is deciding what should go into it. Remember, the less code, the better! Even after minimizing code bloat, it would
be great to “guarantee” that code as written is secure. Formally verifying the security of code is
an active area of research. The best we have right now is a process called
Independent Verification and Validation. This works by having code audited by a crowd
of security-minded developers. This is why security code is almost always
open-sourced. It’s often difficult for people who wrote
the original code to find bugs, but external developers, with fresh eyes and different
expertise, can spot problems. There are also conferences where like-minded
hackers and security experts can mingle and share ideas, the biggest of which is DEF CON,
held annually in Las Vegas. Finally, even after reducing code and auditing
it, clever attackers are bound to find tricks that let them in. With this in mind, good developers should
take the approach that, not if, but when their programs are compromised, the damage should
be limited and contained, and not let it compromise other things running on the computer. This principle is called isolation. To achieve isolation, we can “sandbox”
applications. This is like placing an angry kid in a sandbox;
when the kid goes ballistic, they only destroy the sandcastle in their own box, but other
kids in the playground continue having fun. Operating Systems attempt to sandbox applications
by giving each their own block of memory that other programs can’t touch. It’s also possible for a single computer
to run multiple Virtual Machines, essentially simulated computers, that each live in their
own sandbox. If a program goes awry, worst case is that
it crashes or compromises only the virtual machine on which it’s running. All other Virtual Machines running on the
computer are isolated and unaffected. Ok, that’s a broad overview of some key
computer security topics. And I didn’t even get to network security,
like firewalls. Next episode, we’ll discuss some specific
example methods hackers use to get into computer systems. After that, we’ll touch on encryption. Until then, make your passwords stronger,
turn on 2-factor authentication, and NEVER click links in unsolicited emails! I’ll see you next week.
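As a postscript, the combination counts quoted in this episode are easy to check. The 72-character alphabet below (upper and lowercase letters, digits, and roughly ten symbols) is an assumption, since the episode only says “upper and lowercase letters, special symbols, and so on”:

```python
# Checking the combination counts quoted in this episode. The 72-character
# alphabet (26 + 26 letters, 10 digits, ~10 symbols) is an assumption.

pin_4 = 10 ** 4                 # 4-digit PINs: trivial to brute-force
pin_8 = 10 ** 8                 # "a hundred million combinations"
password_8 = 72 ** 8            # 8 characters from a 72-symbol alphabet
passphrase_3 = 100_000 ** 3     # three words from ~100,000 English words

print(f"{pin_4:,}")         # 10,000
print(f"{pin_8:,}")         # 100,000,000
print(f"{password_8:,}")    # 722,204,136,308,736 -> "more than 600 trillion"
print(f"{passphrase_3:,}")  # 1,000,000,000,000,000 -> roughly 1 quadrillion
```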

The benefits of certifying IBM Spectrum Scale with the Hortonworks Data Platform by Par Hettinga

So what is IBM launching with Hortonworks in 2Q 2017? We are certifying IBM Spectrum Scale with the Hortonworks Data Platform. We certified IBM Power Systems for the Compute part of HDP in April 2017. What we are now certifying is that the Hortonworks Data Platform can run Spectrum Scale as the storage layer instead of the default Hadoop Distributed Filesystem, on both Power Systems and x86, using the Spectrum Scale Transparent HDFS Connector. Since this certification is for the Spectrum Scale software, it applies to both the software-only version of Spectrum Scale and our integrated appliance called Elastic Storage Server, which runs on 2 Power Servers with the Spectrum Scale software and storage hardware in a single node.

The main client benefit of running Hortonworks HDP with IBM Spectrum Scale instead of HDFS is the big cost saving resulting from the reduction in the data footprint at the customer site, and the ability to do in-place analytics. With the Hadoop Distributed File System in a traditional application environment, data is stored in multiple NAS boxes, and you have to move the data from these NAS filers to the Hadoop Distributed Filesystem before you can run your Hadoop analytics; when that is completed, you need to move the results back to your NAS filers. As the amount of data that needs to be analyzed grows into the multi-terabyte and petabyte range, moving data from the NAS filers to HDFS becomes not only cumbersome but very time consuming, potentially taking many hours or even days, so that stale data ends up being used to generate results because of the long copy process.

Because IBM Spectrum Scale supports multiple storage protocols, like POSIX, NFS, SMB/CIFS and iSCSI, plus Swift and S3 for object storage, we are able to build a huge data lake and run in-place analytics without the need to copy data as in a typical Hadoop HDFS workflow.
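The “many hours or even days” claim is easy to sanity-check with a back-of-the-envelope transfer-time estimate; the single 10 Gbit/s link and 80% efficiency below are assumptions of mine, not IBM figures:

```python
# Back-of-the-envelope check on why copying data into HDFS hurts at scale.
# The 10 Gbit/s link and 80% efficiency are assumptions, not from the text.

def transfer_days(n_bytes, link_bits_per_sec=10e9, efficiency=0.8):
    """Days needed to push n_bytes over one network link."""
    seconds = n_bytes * 8 / (link_bits_per_sec * efficiency)
    return seconds / 86_400

TB = 10 ** 12
PB = 10 ** 15

print(f"10 TB: {transfer_days(10 * TB) * 24:.1f} hours")   # ~2.8 hours
print(f" 1 PB: {transfer_days(PB):.1f} days")              # ~11.6 days
```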
With Spectrum Scale, applications can store their data in the same filesystem where the Hadoop analytics jobs are performed, because the data can now be accessed using the Spectrum Scale Transparent HDFS Connector.

The second major client benefit is that HDFS normally does a default 3-way replication for data protection and performance. So if you have 5 PB of data, with 3-way replication you will need 15 PB of storage. Using the IBM Elastic Storage Server, running IBM Power Servers and Spectrum Scale software plus GPFS Native Software RAID, you eliminate the need for 3-way replication: for 5 PB of data you will only need 6.5 PB of storage, a saving in storage capacity of more than 40%.

In summary, eliminating the need to move data from NAS filers to HDFS, and reducing the amount of storage needed for running Hortonworks HDP, provide compelling reasons for clients to move to an IBM Spectrum Scale or Elastic Storage Server based analytics solution. To get more information on this offering, please visit the IBM Spectrum Scale website.
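The capacity arithmetic above can be checked directly; note that the straight ratio works out to roughly 57% less raw storage, so the quoted “more than 40%” is conservative:

```python
# Capacity arithmetic from the text: 3-way HDFS replication versus the ~1.3x
# overhead implied by the Elastic Storage Server figures (5 PB usable -> 6.5 PB raw).

usable_pb = 5
hdfs_raw_pb = usable_pb * 3       # 3-way replication: 15 PB of raw storage
ess_raw_pb = usable_pb * 1.3      # quoted ESS figure: 6.5 PB of raw storage

saving = 1 - ess_raw_pb / hdfs_raw_pb
print(f"{hdfs_raw_pb} PB vs {ess_raw_pb} PB raw: {saving:.0%} less raw capacity")
```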

MSP Databalance Case Study: Business decisions in real time based on fast data-driven Analytics

In our modern datacenters, we prefer IBM infrastructure. For our environment we have a mixed platform from Intel, Linux, Power i and AIX.
For this we use the various IBM Power P, Power i and Intel servers. All these systems are linked using the capabilities of the IBM SAN Volume Controller, FlashSystem V840 and V7000. As central storage solutions we use the V3700, V7000 and V840 storage systems, because of their excellent speed, reliability and low operating costs. The SAN Volume Controller is used for Easy Tier, Real-time Compression and mirroring. With these standard techniques present in our systems, we are redundant and thus highly available. To ensure continuity, we are using Tivoli Storage Manager software on most platforms, fully integrated with our SVC solutions. Together with IBM we can offer all possible cloud solutions for our customers: IaaS, PaaS and SaaS.

The SAP platform used by Beeztees is hosted by Databalance Services. Databalance advised Beeztees to put the database servers on IBM flash storage. This IBM V840 flash storage delivers more than 400 thousand IOPS. The other servers have been placed on Easy Tier storage, resulting in an optimal mix of speed and capacity. In practice, the generation of reports, lookup jobs and batches is processed much faster. Databalance is a key partner of Beeztees in the field of automation. Throughout the whole migration Databalance has been involved and has advised and supported us. The result of the last months is a very modern, state-of-the-art ERP platform based on SAP software and IBM hardware, which enables Beeztees to stay a few steps ahead of the competition.

IBM Spectrum is based on software-defined storage, and it enables users to obtain increased business benefits from their current storage products, whether from IBM or another vendor. IBM has pioneered in this field since 2003 and supports more than 265 storage systems from several brands. This gives you more value from earlier storage investments. Databalance is making use of the IBM Spectrum family in serving its clients.
IBM Spectrum Virtualize gives maximum flexibility and reliability by virtualizing the storage. You can gain further benefits by using features like Real-time Compression and Easy Tier, and of course you can create a disaster recovery environment by implementing remote mirroring. IBM Spectrum Protect enables reliable, efficient data protection and resiliency for software-defined, virtual, physical and cloud environments.

Ubiquity to enable IBM Spectrum Storage in Containers (Docker & Kubernetes) by Robert Haas

As the CTO for Storage Europe, I mentioned in my previous update that we intended to deliver a way to integrate our Storage in container environments such as Docker Swarm and Kubernetes. Well, this is now a reality, and it is called Ubiquity, thanks to the hard work of a team spanning our Research and Development labs across the world. Ubiquity is available as open source, in experimental status at this time. Let me briefly explain here where we see the adoption of containers, and what this Ubiquity technology enables, in a bit more detail.

Many surveys show that the adoption of containers, and more specifically Docker, is accelerating, including in enterprise environments. You may have noticed the announcements by many large companies intending to adopt containers for most of their infrastructure. This covers many use cases, such as traditional applications, HPC, cloud, and DevOps, for instance. In HPC, the portability of containers ensures that a workload can go from the testing laptop of a scientist to the big supercomputer without changes, that is, from quality assurance, to staging, to production, with the same code. In a cloud environment, whether on-premise or not, containers are attractive because they deliver the best resource utilization and scalability, with the smallest footprint and the highest agility. Finally, for DevOps, containers simplify and accelerate application deployment through the reuse of components specified as dependencies, encouraging a micro-service architecture strategy. In summary, containers are a standard way to package an application and all its dependencies; they are portable between environments without changes; and they isolate unique elements to enable a standardized infrastructure, all of that in a fast and lightweight fashion.
Now, with the adoption of containers increasing beyond just stateless things such as a load balancer or a web application server, there is a need to provide support for persistent storage, that is, storage that remains after containers stop, so that data sets can be shared, so that the output of an analysis can be retrieved by other processes, and so on. For many adopters of container technology, persistent storage and data management are seen as the top pain points, hence storage vendors have started to support ways to enable their products in the Docker and Kubernetes container environments using what are called plug-ins. With the technology we call Ubiquity, so named because it is targeted to support all of IBM Storage in all types of container environments, we have now released this ability as well. As I said, it is available at the moment in experimental status, so we welcome feedback, and you can download it as open source from the public GitHub repository. In a nutshell, Ubiquity is the universal plug-in for all of IBM Storage. With this plug-in, and the underlying framework, storage can be provisioned and mounted directly by the containerized applications, without manual intervention. This is key to enabling agility in an end-to-end fashion. This allows you to take advantage, for instance, of our Data Ocean technology such as Spectrum Scale in container environments. This way, you can also take advantage of the unique capabilities of Scale in terms of performance, scalability, and information lifecycle management. And you can also seamlessly integrate our block storage such as Storwize. We are convinced that containers are going to play a role as important as VMs, if not more so. Containers are already the norm in the IBM Bluemix offerings, and have been adopted by our Power and Z products. With Ubiquity we are now able to close the loop with Storage.
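In Kubernetes terms, the plug-in model described above sits behind the platform's standard storage abstractions: an application requests persistent storage with a PersistentVolumeClaim, and the vendor's provisioner satisfies the claim. As a minimal sketch (the claim name and `storageClassName` value are illustrative assumptions, not actual Ubiquity configuration; the project's GitHub documentation describes the real setup), here is the kind of manifest an application would submit, built as a Python dict and emitted as JSON, a format Kubernetes accepts alongside YAML:

```python
import json

# Sketch of the Kubernetes object an application submits to request
# persistent storage from a dynamic provisioner such as Ubiquity's.
# The names used here are placeholder assumptions for illustration.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-claim"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        # Hypothetical storage class assumed to be backed by the plug-in:
        "storageClassName": "spectrum-scale",
    },
}

# Kubernetes accepts JSON manifests as well as YAML, so this output
# could be saved to a file and applied with `kubectl apply -f`.
print(json.dumps(pvc, indent=2))
```

Once such a claim is bound, a pod simply references it by name in its volume spec; the provisioning and mounting happen without manual intervention, which is the end-to-end agility described above.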
We are already collaborating with a number of clients testing Ubiquity, so that we can develop this technology to match our clients' needs. Among many other things, we intend to adapt Ubiquity to the rapid changes occurring in container frameworks, such as CSI (the Container Storage Interface), currently being worked on by the Cloud Native Computing Foundation's (CNCF) storage working group. To conclude, with this you get the best of new-generation applications together with the performance and enterprise support of IBM Storage.

Western Allied Mechanical Talks About Using Spectrum by Viewpoint

I’m Angie Simon. I have been at Western Allied for almost 30 years now; this is my 30th year. And I am the President of Western Allied, and have been since 2008. Western Allied is a 56-year-old company. We started in southern California, in LA; three founding partners put it together. We are a design-build mechanical contractor. The company has grown quite a bit since the millennium. In 2003 we separated from the southern California Western Allied and formed our own independent company, Western Allied Mechanical. And last year we did $81 million worth of business. We are a union contractor, so our manpower in the field goes up and down based on our workload. But at our peak we’ve had about 250 people. And the Bay Area is a wonderful place to work. It’s definitely the heart of Silicon Valley, and technology keeps us moving here in the Bay Area. The culture in the company is very distinct. We have people that have been here a long time; I mean, I have been here 30 years, but I have project managers who’ve been here 22 years. I have a superintendent that’s been here almost 30 years. We have a lot of company functions, and I think it keeps the culture and the people very happy. We have different trades. We have union pipe fitters and union sheet metal workers here, and we try not to be us against them. We’re all under one flag, so it’s a great culture here, a great company to work for. You have clients that are very dynamic. Their needs change from the time that they have started the project to the time that we’re delivering it, because they’re dynamic companies. You have startups, you have bio-pharm, you have tech, and the process of design-build can take six months, and in those six months, those needs change. What I like about my job is that there’s something different every minute. I work for a team, so there are constantly five, six people coming up to me asking for different things in a day.
It’s getting information, passing information between the office and the field or subcontractors, making sure everybody’s on the same page. Financially tracking the jobs, making sure that we’re going to be successful on them financially. Back in 2005, when Western Allied was looking for an alternative accounting and financial management package, we were using a homegrown program that somebody had written in southern California and sold to a number of contractors. So at that time, a search was initiated, and Spectrum won out. One of the reasons we picked it in the first place, besides the accounting side that we felt was robust, was that we felt they had the desire and intention to build a project management module so that eventually our project managers would do everything within the program. That was the ultimate goal. We’ve recently started doing our work-in-progress reviews in Spectrum. It’s made it a lot faster. How I use Spectrum here in the shop is that we’re starting to do some productivity tracking. How many pounds per hour, or square feet per hour? How many pieces per hour? I can go into Spectrum and get the full inventory of what was charged to each job. I can also go on and check the hours, how many hours were spent on each task. Using the job compliance feature has really helped in keeping us on track with knowing when all our documents are done and completed. It makes it very easy to see on the dashboard that you’re out of compliance, and you can just click on that module or on that dashboard and get directly to the page where you’re out of compliance. The reason we like to do payroll in-house is because of Spectrum: being able to track all the different codes and all the different time that the guys are spending in the field on these different tasks and codes, so that we can then estimate our jobs better, get more jobs, and make more money.
Spectrum allows us to take it from time entry to completion, being able to pay taxes and everything else, and all of the data that we need is right there for us. Document imaging is probably the cornerstone of what we do in project management. We try to exceed our clients’ expectations, and to do that we have found that we need instantaneous document management, that we need to have that access. Last year, when I saw Service Tech at the User Conference, I was really excited because it looked to be just what we needed. We wanted something that we could push out to our field, where we could push out work orders, service calls, and preventative maintenance service calls to our field guys, and be able to get the time back in. This will streamline the billing process; from a CFO’s perspective, that’s always important, and it just looked outstanding. The dashboards have been very helpful in my work alone, to be able to just click on a dashboard and have it bring you to the place you need. Our PMs are just starting to use the dashboards, and we’ve created a template that loads several dashboards onto their home screen, as well as putting folders in the right-hand corner so that they can access all of the documents and information they need very quickly. One of the great things is that you can go home and log right into Spectrum without having to remote in to your desktop or anything. You can pull it up anywhere. So somebody calls you with a quick question, you’re like, “Hey, I don’t know,” and you’re able to grab your iPad, go right in, and find what you need. I have a tablet that I can use to connect in the field and log in to check things if I need to. I mean, before, we’d have to VPN in and then connect. One of the things that I always talk about, that impresses me to no end, is your technical support. Invariably, I don’t think I’ve ever called and not had a live human being on the other end, a knowledgeable human being that could answer my questions.
And if they didn’t know the answer, they set me up with somebody that did, and I had an answer quickly. It’s just an outstanding product.

Christenson Electric talks about using Spectrum

Mark Walter, president of Christenson Electric. When I came back in 2003, there was a redirect to a more core-customer, core-competency base, as opposed to just being the standard, typical electrical contractor. We talk about exceeding customers’ expectations a lot. We peak at a little over 400 to 430 employees during our heaviest summer months, typically during the heaviest construction season. From a revenue standpoint, we’re just shy of $100 million a year. Christenson’s really known for customer service, our service work, and our red vans. I mean, everyone around town, if they see that you work for Christenson, they’ll say, “Oh, you’re the one with the red vans.” It’s crazy what a red van with one word on it symbolizes. The first word that comes to mind is tradition. We have our fingers in everything across the trade, whether it be high voltage, low voltage, access control, fire alarm, voice, and data. Obviously, electrical work, service work, construction work, design-build, account work. The list goes on. There’s nothing in here that we can’t do. I’m getting ready to start a ground-up storage facility that’s six stories tall, and we’re carrying everything, turnkey. Christenson’s a can-do company. No job’s too small, and we’re all willing to chip in and help everyone together. We really want to build long-term relationships. We’re looking for customers that we can be a part of and help grow over the years. Through our corporate partnership with the Portland Trail Blazers, we’ve been able to develop additional community partners. And one of those partners has been the Portland chapter of the Special Olympics. This is just another one of the ways that we strive to have an impact in our community. We try to maintain that family-oriented type of atmosphere where people come to work and, hopefully, are having fun. We really pride ourselves on being the most customer-oriented contractor in town.
We always feel that the people closest to the work are the ones that make us successful. And so we try to eliminate any barriers between our field electricians and the customers themselves. Our role is really just to make sure that they have the right tools and equipment to do that job for the customer. A company like Christenson, with multiple employees and multiple divisions, is drowning in data. So what we’re trying to do is condense that data and deliver it to our users in a concise manner that allows them to make real-time business decisions. Spectrum is essentially our main data warehouse. It’s where we store a lot of job cost records, payroll records, receivables, payables, and equipment records. And we’re trying to disseminate that information out to our stakeholders. Recently, we’ve implemented the equipment module to better track where our fleet is. We have a very large fleet, and it’s helping us derive what the true cost is to operate, say, an E-350 van versus a new Transit van, and helping identify when we should be retiring assets and what it really costs us to run that vehicle. When my guys make purchases through the RPL system, I’ll go through Spectrum maybe once or twice a week just to approve our invoices. And by doing that, it allows us to manage those projects so that the guys in the field don’t have to. Like I said, they’re electricians. They need to worry about electrical work. I start my mornings off on the dashboard, just looking at overall where our balances are, where our large subcontractor payments are, and just trying to get a good overall view of how the day is going to look for job costing and accounting. And then we’re developing individual screens and dashboards for the project managers so that they can do that same thing for their jobs.
I will use it to run a billing status report where I’m extracting the data from Spectrum, putting it into an Excel spreadsheet, and then distributing it, letting everybody know, “Here’s your jobs. Here’s your costs. We need to bill this,” whatever. So we created our own dashboard app. It was called My AR Aging Tools. And this gave the project manager the ability to log in and see his customers’ aging totals versus having to drill down and pull a report. So they would see if the customer was 1 to 30 days out, 30 to 60 days out, or 60 to 90 days out. One of the biggest reasons that we went with Spectrum is their time-and-material billing process. A lot of our jobs are just a quote to a customer for $1,000. We put it in the system, costs hit the job, and then we bill it. So AR literally takes seconds to bill a single invoice. Our focus is really trying to get the information into the hands of the people that can use it quickly. So we’re looking to use Spectrum to put everything in the hands of our technician so that he can bill the job before he leaves the site. Well, Spectrum helps us by being mobile. We have job sites all across the Northwest, and all across the United States where we might pick up a job site. So it allows our division managers and our project managers on a site to instantly pick up some data they need. Being able to access Spectrum anywhere, on anything, at any time is really great. I’ll leave my electrician on the line and say, “Give me a minute,” go through Spectrum, log in, pull it up, and I can give him his information on his project within minutes. What I like about the low IT strain is that we’re able to maintain, with one IT guy, multiple users and multiple sites. It’s a stable, steady product. For me, it’s very easy. I’ve had our guys set it up to where I can just single-click and pull up what I need to look at.
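The AR aging buckets described here are simple date arithmetic: each open invoice amount is grouped by how many days it has been outstanding. A minimal sketch (the invoice data and bucket boundaries are illustrative assumptions, not Spectrum's actual data model or report logic):

```python
from datetime import date

# Hypothetical open invoices as (invoice_date, amount) pairs; these are
# made-up values for illustration, not real Spectrum records.
invoices = [
    (date(2017, 6, 1), 1200.0),
    (date(2017, 5, 1), 800.0),
    (date(2017, 4, 1), 500.0),
]

def aging_buckets(invoices, as_of):
    """Group open invoice amounts into 1-30 / 31-60 / 61-90 / 90+ day buckets."""
    buckets = {"1-30": 0.0, "31-60": 0.0, "61-90": 0.0, "90+": 0.0}
    for invoiced, amount in invoices:
        days = (as_of - invoiced).days
        if days <= 30:
            buckets["1-30"] += amount
        elif days <= 60:
            buckets["31-60"] += amount
        elif days <= 90:
            buckets["61-90"] += amount
        else:
            buckets["90+"] += amount
    return buckets

# With the sample data above and an as-of date of June 20, the three
# invoices land in the 1-30, 31-60, and 61-90 buckets respectively.
print(aging_buckets(invoices, as_of=date(2017, 6, 20)))
```

A per-customer version of this grouping is essentially what a dashboard like the one described would display at a glance.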
What’s really great is the fact that you don’t have to go in and out of screens to get to where you want. So if you’re in a job maintenance screen, you can go right to a report. You can go right to a finance screen. Everything’s right there. I’d recommend Spectrum. I have recommended Spectrum. Data and information are power, and the greater the clarity with which we can distill that data, whether it’s through software like Spectrum, the better. Being able to use that information on a daily basis helps us make the best business decisions for our company.

Robbie Baxter | "The Membership Economy" | Singularity University

(music)

– So Robbie, first of all, what is The Membership Economy? We've heard about it from American Express; they're probably the most well known for the Membership Plan.

– So, it's a massive transformational trend that I've seen in virtually every industry, from software, to hospitality, to financial services, and it's all about a move from ownership to access, from anonymous transactions to known relationships, and from one-way communication, where you're just pushing messages at your customer, to an open conversation, not just between you and the consumer, but also among the customers themselves under your umbrella.

– So this sounds like a fundamental strategy shift. I mean, really a very different way of thinking about how you engage with customers in every aspect of your business.

– Yeah, absolutely. It's about putting the customer at the center of everything you do instead of the product or the processes or even the technology.

– Tell us about some of the core elements that are enabling this transformation to happen around membership.

– So membership is not new, right? I mean, we've had membership since the 12th century: trade guilds and religious groups. But what's happened recently is two big things. One of them is that technology has extended the infrastructure that enables trusting relationships. So, we've always wanted to have these long-term relationships with the companies that serve us, but now it's possible to do that not just with companies that we know personally, like the shop around the corner, but actually with organizations where we've never met anybody. And this is through technologies like always-on devices, mobility, artificial intelligence that gives us a personalized experience, and the ability to connect networks. All of that is enabling new ways of relating. The other thing is the influx of financial capital, which is giving entrepreneurs a longer runway to build relationships with their customers before they actually have to generate revenue.

– So, the fact that we are always connected, and we have so many different ways to connect, is enabling these organizations to think differently about how to be a part of those connections.

– Yeah, it's like a new palette of colors that you can use when you're painting your business model.

– I love that; can you give us some examples of companies that have taken advantage of this new palette?

– Yeah, well, there are two groups. There's what I think of as the digital natives (the Amazons, LinkedIn, Netflix), who started their businesses thinking about the forever transaction, thinking about this long-term, member-oriented approach. And then there are companies that have transformed to membership models, companies like Intuit and Adobe, who have moved from these anonymous box transactions to a real ongoing relationship, a subscription-model community with the people they serve.

– So Robbie, you know that at Singularity University we spend a lot of time talking about impact. Does the membership economy work in the social sector? Are there other examples that you've seen of organizations that are not necessarily in the corporate world that are using this strategy?

– Yeah. Well, you guys talk a lot about grand challenges, and one organization that I work with, the American Nurses Association, has a grand challenge going on right now where they're focused on helping the 3.5 million nurses in the US get healthy. Because on the five major elements of health, which are, like, stress, sleep, weight, smoking, and I think drugs, maybe (I think those are the five), nurses perform less well than the American population at large in four out of five. So they're using online community, they're using their subscription model, they're using their live events, all to support this initiative, this grand challenge around making nurses as healthy as possible this year.

– I think that all leaders are gonna need to really take a hard look at their business model. What suggestions would you have for leaders that want to really understand how to get into the membership economy, and how to make sure that they really are getting their leadership team prepared for thinking very differently about strategy?

– So I think the first thing is to get them out talking to customers, and really understanding what is the value that they provide. As Clayton Christensen says, "What's the job that your product does for them?" And that's one piece. Also, getting into their customers' shoes and understanding what technologies they expect and see as the new normal. And the other thing is not getting too wrapped up in the technology. Because even though technology is great, it's not great when it's not in service to an actual benefit for the person you're trying to serve, the customer.

– So how do we become more curious? How should we look at new businesses? What are some questions we should ask?

– That's a good question. So, becoming more curious, it's innate. We are all curious. If you've been with a four-year-old recently, you know we are born to ask questions. And over time I think we get embarrassed about it, or we think we know too much. So, what I'd suggest is: ask questions, look at businesses, and think to yourself, why is this business successful? What can I learn from this business? A lot of people have said to me, "We want to be the Netflix of our industry." And on some level you can't copy an organization. On the other hand, you can ask the second question: what would that look like? So okay, great. What would it look like if you were the Netflix of your industry? What would that be? If you Amazoned your competitors, what would that mean? And so sometimes, just asking the second question is a great way to really break open the paradigm.

– Great, well, so many wonderful things that you have to share. Go and talk to customers. Ask better questions. Try to understand which organizations are doing well, and why. Be curious about it.