Windows Server Summit 2019 event


[MUSIC]>>Welcome, everyone. My name is Jeff Woolsey, Principal Program Manager for
Azure Stack HCI and Windows Server, and I’m joined today by >>Vijay Kumar, I’m the Director
of Product Marketing. I lead the Windows Server business. It’s great to be here. Coming up live, we
have awesome sessions for you on Hyper-converged
Infrastructure, a demo-filled session on
Windows Admin Center, a quick overview of
System Center 2019, and three ways to modernize Windows
Server Apps with containers. We also have deep dive sessions that you will be able
to access on-demand.>>In addition, we have live Q&A while you’re watching; we have experts that
are online right now, and of course we’re going to
have a little bit of fun. Cosmos is going to take
us behind the scenes, interview some of our developers, some of the folks who make Windows Server
the product it is. We're really excited and we’re going
to have a lot of fun today. In addition, we have
a really cool contest. So if you earn 250 points, you’ll be entered in
the prize raffle to win an Xbox or Surface headphones or
an equivalent gift certificate, and we have five Xboxes
and 10 Surface headphones. So you can earn points by completing
the following activities: watching the live broadcast, taking the event survey, completing the knowledge check, and chatting with us in the Q&A window. So we’re going to have a lot
of fun today, and again, five Xboxes and
10 Surface headphones.>>Over the last few months, we made some exciting announcements. We announced Windows Server 2019, which has become one of the more popular Windows
Server versions ever. We also announced
System Center 2019 in March to support Windows Server
2019 in your data centers. We announced Azure Stack HCI, which is a Hyper-converged
Infrastructure product. We also announced the latest release
of Windows Admin Center, which is a very popular
management tool, and we just announced VMware on Azure to meet
customers where they are.>>This is really a huge announcement because we know that a lot of our customers run Windows Server and Microsoft
workloads on VMware. So this is really exciting because now it means you can
take advantage of the VMware vCenter
management interface you're already familiar with, use it up in Azure,
and make it really easy to plug into Azure resources.>>We are announcing
Azure Kubernetes Service, AKS, support for Windows
Server containers in public preview today. Kubernetes is the leading
container orchestrator. With this, you now have
the capability to use AKS to manage containerized
Windows Server apps in Azure.>>This is really
huge because we know that people are taking
advantage of containers, and Kubernetes is really the standard orchestration
that folks are adopting. Well, with now support for
Windows Server containers, people can build
applications that utilize both Linux containers and
Windows Server containers, and use these together in the
Azure Kubernetes Service. So big announcement today in the world of Azure and the
Azure Kubernetes Service. Of course, this is an opportunity
that we want to take a moment to say just a huge thank
you to all of you, to our customers, to our partners. Whether you’re a hardware
partner or software partner, we want to just say thank you because Windows Server is
an incredibly popular product. One of the reasons we’re
doing this Windows Server Summit is because so many of
you came out and said, “We really want to have something
very specific, tailored for those of us who love and use
Windows Server every day.” So a huge thank you,
wherever you are.>>A big thank you to our customers. Most organizations today run their
critical workloads on-premises. When it comes to on-premises
server workloads, more than 70 percent of
them run on Windows. Windows Server supports a broad array of workloads and apps every day.>>There’s a good reason for that. It’s because one of
the things we focus on is a tremendous amount of innovation. Now, if you’re looking at
this and you’re going, “Hey, I’m still in the Server 2008
or 2008 R2 camp,” guess what, there’s a lot of innovation that
you’re not taking advantage of, whether it’s data center scale,
software-defined, mission-critical,
Cloud-ready, new application innovations such as containers
or Hybrid and Hyper-converged. We’ve been driving
a tremendous amount of innovation and adopting it in Windows Server. Today, of course, we also have
another announcement as well.>>We’re announcing
Windows Server version 1903 in insider preview. This is our semiannual
channel version.>>Yes. With our semiannual
channel releases, these are focused on containers and application innovation that
runs within the guest. So for folks that are
building new applications, you’re going to love
all the new capabilities that are coming into 1903. Lots of container
innovation, Kubernetes, network overlay, performance
improvements, and so much more.>>We discussed a lot about
Windows Server powering your workloads on-premises, but Windows Server technologies
permeate everywhere. Windows Server
technologies power Azure, Azure Stack, and Azure Stack HCI. So it not only runs your workloads as virtual machines or guests
as we sometimes call them. It also powers the
infrastructure, the foundation.>>That’s right. For you, that means not only having a trusted set of technology
across multiple platforms, but also Windows Server
is truly everywhere, whether it’s the infrastructure
layer, like I said, whether it’s Azure or whether
it’s running on hardware, or whether it’s running in
the virtualization layer, whether it’s running in
a guest or whether it’s running on the bare metal. Windows Server on Azure Stack
or Azure Stack HCI, on-premises on your physical
or virtual machines, or up in Azure. Windows Server technologies
is that technology that truly permeates on-premises
hybrid and the Cloud. Now before we go on, I’d
like to take a moment to talk briefly about
my career journey. I entered the tech industry
because I enjoy the creativity, the challenges, and the ability
to make people’s lives better. Let me give you
one quick example from an e-mail we received
a few years ago, and literally, this is what it said. “Good morning. Hurricane Sandy
hit our area badly. Many downed trees,
even on my wife’s car. Flooding and total
power cuts everywhere. We’re very grateful
that everyone is well. I want to thank the
Microsoft Server team for giving businesses the new
replication features. Two of our clients, both of whom cannot be without
their infrastructure, were entirely flooded and it will take weeks to get back
into their businesses. At 07:00 PM last night, we failed over their entire domains
to the secondary replica site and they’ve been able to continue their daily business
with zero interruption. Windows Server saved their business.” Now, personally, being
a part of the team that delivered these capabilities
that help protect you, that’s one of the things
that truly drives me. So throughout my career, one constant has been change. Whether it’s been
scale up, scale out, new hardware and new software, or the explosion of data,
change has been a constant. We’re in the midst of the next
major inflection point, Cloud. Now a large portion of you have told me that you’re
embracing the move to hybrid and are excited by
the possibilities. I am too. I’m excited by giving you
with consumer ease-of-use and making it easy for you to enable hybrid scenarios when you’re
ready on your timeline. We’ve asked you what you
want from Hybrid Cloud, and you’ve told us you want the
Cloud for agility to help improve your on-premises
workloads and to make your job less stressful
and more productive. There’s also a few of you
out there that privately admit you’re a little anxious
because of the change. Take a deep breath.
We’ve been here before. For example, we advised
you it would be valuable to learn technologies
like Active Directory, Remote Desktop Services, Hyper-V, and Storage Spaces Direct. In fact, I remember
being onstage with Jeffrey Snover urging
you to learn PowerShell. These days, when I ask for a show of hands of who’s using PowerShell, about 80 percent of the hands in
a crowded conference room go up. That’s awesome. If you’ve added these technologies
and skills to your resume, you’ve made yourself more valuable. Well, the next wave of
technologies is here. Windows Admin Center,
Hyper-converged Infrastructure, Containers, Azure Stack,
Azure Stack HCI, and of course, Azure. We’re making big bets here, and I urge you to learn
these technologies and embrace distributed hybrid Cloud. Windows Server 2019 is taking off. It’s building off the fastest, most widely adopted version of Windows Server ever:
Windows Server 2016. In fact, if you look
at these four pillars, three of them are the same. Why? Because they resonated
so loudly with you. Unprecedented Hyper-converged
Infrastructure, faster innovation for applications like containers and more,
enhanced security. Those are three from
Windows Server 2016, and we added a fourth
because you asked for it: Hybrid Data Center Platform. So what does that mean? It means built-in integration with
Azure Active Directory, backup, site recovery, storage migration, and Azure Virtual Networks, making this as easy as a couple of clicks. In fact, one of my favorite
demos is VM replication, where replicating a VM up to
Azure is literally one click. You can see, we’re just
going to continue to invest and develop
in hybrid even more.>>System Center 2019
is now available. It was made available in March and its primary purpose is to manage your Windows Server
2019 in your data centers. System Center 2019 added
several capabilities such as Hyper-Converged Infrastructure
management; hybrid management, which means it works really well with Azure management capabilities; it also improved
management experiences by adding HTML5 dashboards
and easy notifications; and of course we added several
performance improvements as well.>>There’s a lot more to come. We’ve got sessions on
this as well today. We’re going to dive deep into
System Center 2019 as well.>>Absolutely.>>You can try it now at
www.Microsoft.com/systemcenter. In addition, of course, we also
have Windows Admin Center, your new personal
server management tool. I want to take a moment to
remind everyone that if you already own Windows
or Windows Server, you already own Windows
Admin Center as well. It’s part of the license, so it’s no additional cost.
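[Editor's note: since Admin Center ships as a free, standalone download rather than a Windows feature, installation can even be scripted. A minimal sketch, assuming Microsoft's aka.ms download shortlink and the documented unattended-install MSI properties are still current; verify both against the official docs before using:]

```powershell
# Hedged sketch: fetch and silently install Windows Admin Center.
# The shortlink and the SME_PORT / SSL_CERTIFICATE_OPTION MSI properties
# follow Microsoft's documented unattended install and may change over time.
Invoke-WebRequest -Uri 'https://aka.ms/WACDownload' -OutFile "$env:TEMP\WAC.msi"
msiexec.exe /i "$env:TEMP\WAC.msi" /qn /L*v "$env:TEMP\wac-install.log" `
    SME_PORT=443 SSL_CERTIFICATE_OPTION=generate
```

Once the installer finishes, the gateway is reachable from a web browser on the port chosen above.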
But what does this give you? So much. Windows Admin Center is the future of all Windows
Server Management. All of the new GUI, all of the new management features that we’re building in Windows Server starting in 2019 and going
forward are all in Admin Center. You can look for Storage
Migration Service, you can look for System Insights all throughout the Windows Server 2019 in-box tools, and
you won’t find them. That’s because the GUI
is in Admin Center. It allows you to manage your Windows Server instances
wherever they reside, whether it's on-premises,
in a physical machine, in a virtual machine, on Hyper-V, on VMware, in Azure. Wherever Windows Server resides, I can manage it with Admin Center and you get a consistent way
to manage it whether you’re running Windows Server as a full with the full desktop GUI
or even a Server Core. You get the same management
experience because you’re managing it remotely
from a web browser. There are no agents required. You literally download Admin Center, double-click “Install it”, and in less than 60 seconds you’re
managing your server. So it’s extremely,
extremely powerful. On top of just managing your Windows Server
instances, you get more. You get Hyper-Converged management
all through Admin Center, and I’ll show you that
in just a minute. You can also integrate with
Azure with hybrid services. Again, such as backup, DR, monitoring, and so much more. Again, we’re going to go through
that throughout the day. But I want to take
a quick moment to actually give you a quick demo
of Admin Center. So here is Windows Admin Center, and the first thing
I’m going to do is actually I want to click
on one of these servers. So I’m going to go ahead and select this server and I’m connecting. You can see right off the bat
on the Overview page, I can see things that would
normally be in a couple of different tools like
task manager or system properties. I can see my CPU, my memory
usage, the OS version, the domain, the name of it, and so much more all
in a single glance. If I want to get some
more information, for example, I’d actually like to take a look at say some of the events
for this server, I can click on the “Events” tool,
click on “Security”, and you can see I’ve got
a long list of events here. Now, normally, if I looked at events
and saw 140,000 events, I’d be a little concerned. But no worries, because
I have Admin Center, I’m going to go ahead and click on the “Filter” button here and I’m going to remove the informational
and verbose events. I just want to see
the critical errors, the things that are really important, and I’m going to apply that change. You can see very quickly I
actually have zero items. So nothing to be alarmed about. I’m good from a security standpoint. Now, one of the awesome things
about Admin Center of course is that we
build on PowerShell. One of the most common questions I get about Admin Center is, “Yeah, but you’re going to
require me to plug in, to install an agent on my server.” No. No agents are required at all. Everything you’re seeing here in
Admin Center is using PowerShell, and WinRM, and WMI to manage
your servers remotely. So since it's all built on
PowerShell, I don’t need to use any agents here. Well, once we told people that
it was based on PowerShell, they said, “Wow, that’s amazing. I’d love to see the PowerShell.” So what did we do? One of the first features we implemented
was this “PowerShell” button, and you can see all of the underlying
PowerShell that we’re using, for example, for this events tool. So you can now copy and paste this, use this in Azure Cloud Shell, use it to deliver and
build your own automation. So it’s extremely powerful. In addition, Admin Center
is also a platform. So we’ve made this pluggable. You’ll notice that, on
the left-hand side, we’ve got all these different
tools. Spoiler alert. Every one of those tools
is actually a plug-in. When we launched
Admin Center, initially, that list was probably
about half as long and you can see we’ve just been adding
more and more features. Well, in addition, we’ve made this pluggable to our
partners as well. A moment to say thank you to all of our partners
that are plugging in. You can see DataOn, Dell EMC, Fujitsu, Lenovo, NEC, QCT, BiitOps, Pure Storage, Squared Up, and more. There is so much more
coming down the pike, and we’re going to continue
to focus on making Admin Center the easiest and best way to manage your server
wherever it resides. So the quick thing I always
have to answer for folks as well: How do Admin Center
and System Center work? Does one obviate
the need for the other? No. In fact, they work
together; they're perfectly complementary. Windows Admin Center is
designed to manage an instance. So for example, if I want
to add roles, add features, actually configure files,
configure shares, update drivers, that’s something
I can do from Admin Center. If I want to deploy bare metal Windows Server to
racks and racks of servers, well, that’s something that
System Center will do. It gives me the ability to
do bare metal deployments, gives me the ability to do
enterprise management at scale. But most importantly, they actually co-exist and complement
each other very, very well and that’s by design. Again, we’re going to talk more about System Center later in the day. One thing I want to point out, so I’ve talked about hybrid, is let’s also talk
about another pillar which is really the shift
to Hyper-Converged. If you look at how
we’ve been delivering traditional infrastructure over
the last 20-some odd years, generally, your racks of
servers tend to look like this. You probably have a dedicated
storage solution like a SAN, probably using Fiber Channel, which means then you
need a storage fabric. You need Fiber Channel HBAs, Fiber Channel switches,
cables, and more. Then you’ve got your servers
for your hypervisors. You’ve got miscellaneous network appliances for your load balancers, gateways, and more, and then of
course your top of rack switches. What we’re seeing now is we’re seeing a dramatic shift to
Hyper-Converged because essentially we can
deliver all of this now using standard X86 servers, Ethernet switches, and
the magic of software. Software-defined compute,
software-defined storage, and software-defined networking. We give you all of those gateways, we’re giving you a storage fabric, but we’re doing in a much more
efficient modern solution and it’s all managed with
a consistent interface. Let me give you an example of one
of those areas where we’ve made huge investments
in Windows Server 2019, and that’s with persistent memory. So if you remember the dawn of Flash, Flash initially shipped
on a USB drive. In fact, it was USB 1. Not particularly fast. It’s about 12 megabits per second. Well, Flash easily exceeded that. So they upped it to USB 2 and USB 3. Well, guess what, Flash is
way faster than all of that. So what did we do? We took
that Flash and we said, “We got to move it to a
different storage connector.” So we moved it to SATA. Then in fact, we moved
it to SAS because Flash got faster than
both SATA and SAS. Well, guess what, we needed
something new. So we said, “Okay. We’ll take that Flash
and we’ll attach it to your PCIe bus. Not a problem. Except that’s in your desktop, that’s not in your laptop.” So we actually had to build
a whole new connector called NVMe. So that’s what’s in your laptop. Well, guess what, Flash
is now faster than NVMe. So again, we have another bottleneck. So what have we done? We’ve taken Flash and we said, “Fine, we’re going to put it on
a DIMM socket and we’re going to plug it in right
next to the processor.” This is called persistent memory. Some people call it
storage class memory, but persistent memory literally is sitting right next to the processor. We can actually treat it as memory, we can actually treat it as storage, or we can actually treat
it as a combination. We can actually divide it and treat
it as both memory and storage. Well, the reason why this is so
important is because when you take this persistent memory and
you couple it with what we’re doing in our investments in
Hyper-Converged Infrastructure, you get a tremendous,
tremendous benefit, which takes me to
Windows Server 2019 and our close investments with the
Intel Optane DC persistent memory. So as I mentioned before, our Hyper-Converged Infrastructure
consists of Hyper-V, Storage Spaces Direct, and SDN. Well, previously, back in 2016, we set an IOPS record of
almost 6.7 million IOPS. This was done in
September of 2016 using 16 servers running
Windows Server 2016. To date, I haven’t seen anyone
even approach this record. So we were pretty excited
when we said, “Look, Intel, we’d actually like to
run that same test again using Windows Server 2019, but this time we’d like to use some Intel Optane DC
persistent memory.” In case you’re wondering,
it looks just like RAM except it’s different,
it’s persistent memory. So what this means is it’s high performance, it’s
native persistence. So even when you
power off the server, it all still resides there
on the persistent memory. Also take a look at
the capacity: 128, 256, or 512 gigabytes on
a single persistent memory DIMM and up to
three terabytes per socket. So this is ideal for things like Hyper-Converged Infrastructure
and large scale-up workloads like SQL Server. You'd better believe, by the way,
the SQL guys are plugging into what we’re doing
in Windows Server 2019. So this has full native support in Windows Server 2019
and Azure Stack HCI. This allows us to cache and
accelerate the working set for Hyper-Converged and it’s all
managed through Admin Center. So enough talk, let’s
get to the demo. So first of all, the demo
that I’m going to show you is
actually 12 servers. Not 16; 12 servers with 384 gigs of traditional memory and 1.5 terabytes of Intel Optane
DC persistent memory. We’re using 32 terabytes of NVMe storage running
Windows Server 2019. Again, this is 12 servers, not 16. So here’s Admin Center again, and here I am in the
Hyper-Converged cluster interface. Again, this is all Admin Center. This is again you already own this as part of your Windows license. So if we drill in, we can see that I have a total of 12 servers
that are healthy. I have a total of 72 drives. I have a total of
312 virtual machines. Currently, they’re all off. None of them are running at all. In addition, I have
a total of 14 volumes. Again, this is cluster-wide. This is not one server,
this is the entire cluster. You can see I have a total
of 91 terabytes of storage and currently
about 16 terabytes are used. If I want to drill into the servers, you can go to the
“Inventory” and you can see there are all 12 servers. You can see all the information
is here right now. If I’d like to actually look
at the inventory of storage, we can actually group them by type. So I can see all of my Optane
persistent memory here at the top. What’s interesting is, if
you actually take a look, you can see we’re using it as
cache and notice the size. Seven hundred and
sixty-eight gigabytes, that’s kind of
an odd size for a cache. Well, let me let you
in on a little secret. What we’re actually
doing is we’re actually interleaving these devices. So we’re actually communicating with two Optane DIMMs simultaneously just like we would
with traditional RAM. If we scroll down, we can get to the NVMe storage and
let me point out. At a little over 4 terabytes
of storage per NVMe drive, you can see capacity on
Flash NVMe has really risen. All right. Finally, let’s
scroll to the bottom of this because this is where we’re going
to be showing really the demo, which is the overall
cluster performance. You can see right now it’s idle, we don’t have any VMs running, and you can see our latency
at one microsecond. Not millisecond, microsecond. So the reason I point this
out is because this is where we’re going to be
focusing and watching the demo. Again, the number we want to
beat is 6.7 million IOPS. So I hope everyone is ready. We’re about to get started.
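[Editor's note: for readers who want to try something similar themselves, published Storage Spaces Direct numbers like these are typically generated with Microsoft's open-source DISKSPD synthetic I/O tool (driven at cluster scale by VMFleet). A hedged sketch of a representative invocation, with illustrative parameters rather than the exact record configuration:]

```powershell
# Hedged sketch: a 4K random-read DISKSPD run in the spirit of this demo.
# -b4K   4 KiB blocks         -t8   8 worker threads
# -o32   32 outstanding I/Os  -r    random access pattern
# -w0    100% reads           -Sh   disable software and hardware caching
# -d60   run for 60 seconds   -L    capture latency statistics
.\diskspd.exe -b4K -t8 -o32 -r -w0 -Sh -d60 -L C:\ClusterStorage\Volume1\io.dat
```

The target file path is hypothetical; point it at the volume under test.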
Vijay, one second. I need to fix something
and let me just tap. There we go. I needed that. Thanks. We needed
an additional number. So let’s go ahead and break
right into this demo, shall we? The first thing we’re going to do
is we’re going to go ahead and fire up all the VMs on node 1. We’re just going to just see what it looks like for a single server. We’re going to speed this up because we don’t need to sit here
and look at a gas gauge. You can see half a million IOPS, one million IOPS out of a server. Vijay, what’s really
awesome about this is, I remember trying to get
a million IOPS a few years ago. To try and get a million IOPS, I would have a room filled
with racks of disks, and hopefully, if you
configured everything just perfectly, maybe
you'd get a million IOPS. This is a million IOPS
out of a single server, pretty amazing in and of itself. Let’s go ahead and fire
up server number 2. Let’s launch all the VMs, let’s get this fired up
and let’s take a look. So there’s our one million IOPS. Again, we’re waiting for the second
set of VMs that are coming online, and you can see
two million IOPS and again, latency in the low tens
of microseconds, so this is ridiculous. I could literally sit here
all day and do this, but I know we got to
get to other things. So we’re going to go ahead
and fire up the rest of the VMs on the rest of the servers
and let’s take a look. So again, the number we want
to beat is 6.7 million IOPS, and we’re just hitting
four million IOPS. You can see the latency
is still in the low tens, there’s five million IOPS, okay, there’s six, there is seven million, okay, we hit eight million IOPS. So you can see how awesome,
we’re not done yet, nine million, 10 million IOPS. Okay. So we’ve officially
hit over 10 million, ooh, we’re not done yet, 13 million IOPS. So the new record is
13.7 million IOPS. Again, what is most
amazing about this is, we did this on
25 percent fewer servers. Think about this, we just
doubled the IOPS with 25 percent fewer servers,
and if you’re interested, we’ve got this all written
up on our blog and you can take a look at
all of the test details, and in fact, you can run
these tests if you’d like, we’ve made all the tests
available up on GitHub. So all of this work
that we’re doing in Windows Server 2019
for HCI of course, fits right into what we’re
doing with Azure Stack HCI. Of course, let’s keep in
mind that we really have a family and a portfolio of products. Of course, we’ve got full Azure, everything you need
for PaaS, for SaaS, for IaaS, machine-learning AI, regions around the world, you need to build the
largest, biggest, baddest global planetary
scale application, Azure has got you covered. Then we’ve got Azure Stack. So if you want to build an application that’s truly
consistent with Azure, where you literally write an application to
the Azure resource model, you can deploy it on
Azure Stack or in Azure with literally no code change, that’s really exciting
for people that are innovating building
those new Cloud apps. At the same time, we’ve got a bunch of people going, “This is really cool,
but you know what? I got a bunch of traditional apps, and I’m running on
some really old servers five, seven, 10-year-old servers.” These things are insecure, they don’t have TPM,
they don’t have UEFI, they don’t have secure boot, they’re probably core-limited, I’m sure they’re memory-limited, and they’re probably running
with spinning disks. Help me Jeff, get me to
a new configuration that’s secure that allows me to connect and take advantage of
what the Cloud has to offer, and gets me to
a hyper-converged world. That’s what Azure Stack
HCI is all about, and it gives you integration with backup, site recovery,
and so much more. This takes us to
our hardware partners.>>Right. We took all of the exciting capabilities
that you showed, Jeff, like the 13 million IOPS, and we
put it into Azure Stack HCI. We launched our Azure
Stack HCI in March. Now, you can get
preconfigured, validated, and supported Azure
Stack HCI solutions from 15 different partners that
are listed here on the slide. You can go to
Microsoft.com/AzureStackHCI to get any of these solutions, you can find a whole catalog
there that you can choose from. We showed you a number of
capabilities and innovations that we put into the latest versions of Windows Server, Windows Server 2019. Now, I want to talk to you
about Windows Server 2008 R2. Windows Server 2008 R2 was one of the most popular versions
of Windows Server, and it is ending support
on January 14th, 2020. Not to worry, we have
options for you. If you’re on Windows Server 2008 R2, if you’re running applications that are business critical
or mission critical, you can easily take them as is and move them to Azure and
run them on Azure VMs, and we’ll give you three years of free extended security updates so you can take your
time to modernize, to either move to the latest version
of Windows Server, or to move to containers or other Azure
services that are available. If you want to stay
On-Premises you can upgrade On-Premises to the latest versions
of Windows Server 2016 or 2019. Finally, if you do want to stay On-Premises and you
need time to upgrade, you can always buy extended
security updates On-Premises.>>Now, one thing I
want to point out is, when we talk about people that go
through the modernization process, there's kind of a path that we generally see repeated over and over. Generally with Azure, what we see
people do is they say, “Look, I’m going to take
advantage of Azure AD, I’m going to take
advantage of Office 365.” People start getting
setup with Office 365, they get their identities configured, everything just happens there, and we see very quick success, and then they go, “Wow, that
was really easy and great. What are the next things I can do?” Generally, it happens to be storage. Can I take advantage of
the unlimited storage pretty much in Azure to give me
better support for backup, for replication, for data protection? Can I use Azure as a way
to help me with that? Then, we start to see people
moving to the rehosting which is, “Okay, I’ve got some old virtual
machines and you know what? I’m trying to figure out what
I’m going to do with them.” So I have some apps still
running on 2008 or 2008 R2 or even 2012. What do I do? Well, I know that these are still
critically important to me. So at some point,
I’m going to have to modernize those and figure
out what I’m going to do. But right now, I’m just going
to move them up and rehost them in Azure to take
advantage, for example, of that three years of
extended support because it gives me a little bit of a cushion now just to figure out what I want to do. So people move those
up into Azure IaaS. Now, it gives you the option
for what you want to do. How are you going to
refactor this application? Do you modernize that application
and move it into a container? Is it a traditional .NET application? Is it a Java application? Is it middleware that’s headless? That’s a perfect candidate
for a container. Or do you want to
say, “You know what, I would just like to
take advantage of the capabilities in Azure
and rewrite this as a platform as a service
technology and then take advantage of more of
the native PaaS services in Azure?” So this is generally what we
see in terms of the path as people look to how they want to modernize and innovate with Azure. So looking forward, I want to
take a quick moment to say, “What’s going on with Windows
Server going forward?” So let’s start with
the insiders builds. We have insiders builds
every two weeks. So to anybody who tells me, “Jeff, I don’t know what’s going
on with Windows Server,” the first thing I say is,
“Get on the Insiders list and you’ll get full access to
the next release of Windows Server, you can see what we’re working on.” That’s Admin Center, that’s
the semi-annual channel, and the long-term
servicing channel. Next, we have the semi-annual
channel releases for Windows Admin Center and for Windows Server that we
released twice a year. I want to focus on the fact that
the semi-annual channel for Windows Server is focused on container innovation and on
running in virtual machines; it’s designed for applications. These releases come out twice a year and they’re
supported for 18 months. They’re perfect for containers; we are literally just changing
out a file, and again, we do this every six months. Then of course there’s
the long-term servicing channel or as I like to refer to it, this is business as usual, this is how we’ve been shipping Windows Server for the last
couple of decades. Every two to three years
just like clockwork, we went from Windows
Server 2016 to 2019, and I feel like I’m announcing
something here Vijay, because we know when Windows
Server vNext is going to be, it’s going to be within that next
two to three year timeframe. So if you want to consider
this a product announcement, you’re the marketing guy, so as long as you’re cool
with me saying that.>>Sure, as long as we
don’t announce the date.>>I’m not announcing any date, except that it’s within
two to three years.>>Definitely. We are releasing Windows Server vNext in
the next two to three years. You heard from the man, that’s it. So we’re going to keep
that clock on just like usual. So what are you waiting for? You can try Windows Server today. You can try it in Azure free, you can set up your free
Azure account and get $200 of credit and a number of free
Azure services that you can try. You can simply use Windows
Server in Azure and not have to install Windows Server on any of
your servers on-premises. If you want to move your
production workloads to Azure, you can also save up to 80 percent
using Azure Hybrid Benefit, that is only available to
Windows Server customers. So if you want, again, you can download
the eval version,
Hyper-V on your laptop, you can run it on your
infrastructure and try it out today. Or if you’re saying, “Hey, I don’t have room for this right
now, I don’t have capacity,” again, you can try it up in Azure. So please, continue to
give us your feedback. Please, become a Windows Insider. Admin Center, like I
said, every two weeks, we’re shipping new releases of Admin Center and new releases
of Windows Server so you can see what it looks
like and you can download the preview of the next LTSC. Please join the discussion on the Microsoft Tech
Community, and of course, follow us on the Windows Server blog, on Twitter, on Facebook, on LinkedIn. A whole bunch of us
that you’re about to see here in the
Windows Server Summit
are all putting
there very very regularly. So at this point, I want
to remind everyone, again, don’t forget we
got the contest going on. So if you want to get into the raffle for a chance to
win one of the five Xboxes, or 10 Surface Headphones, or an equivalent gift certificate, we’ve got that covered as well. So with that, we’re going to
get ready for the next one. So thank you very
much and we hope you enjoy the Windows Server Summit.>>Thank you very much.>>Coming up next, let’s see a quick interview with some
of our Windows developers. Take it away, Cosmos.>>Hello and welcome to Microsoft’s headquarters
in Redmond, Washington. We’re here to meet
the team, come with me. [MUSIC] This is Microsoft Studio A. This is where the members of the Windows Server Engineering team
come to work every day to turn your feedback into new and improved features.
Let’s drop in. [MUSIC]>>Hey, Julia, you got a second?>>Yeah. Hey.>>So we’re here with the audience
of the Windows Server Summit, and they sent us
some great questions. So we’re trying to find
the folks who can tell us the answers. Do you
think you can help us out?>>Sure.>>All right. So we had a question
about Windows Admin Center. It seems like it’s a huge
engineering focus from Microsoft, lot of resources being put into it, and the question is: For
a user interface code, UI code, is it harder to test
than other types of features, or how do you guys test it?>>Right. So testing
UI isn’t hard at all. We have an automated
UI testing system. I can actually show you
how it works right now.>>That would be awesome. Yeah.>>So what it does is
it will open a browser for you, and you tell it that
you want to click this button, and then that button,
and then that button, and it will go through the user scenario
for you automatically. See, it looks like someone’s typing here and
I’m not doing anything?>>Wow.>>Yeah.>>That is neat. So I guess
this lets you really do end-to-end exactly what
a user would try, right?>>Yeah, for sure. This is
helpful for that kind of stuff that’s harder to test
individually in the code. It’s things that a user
would do in a scenario, we call it scenario testing, and you can target different SKUs
of Windows Server.>>So you can do like
2012 or 2012 R2, 2016, 2019.>>Yeah. It helps us test like
different environments completely.>>Do you run this every day?>>Yeah. We run this every
time we make a code change, before we actually add it to the common code that
we’re going to release. We run this to make
sure we’re not breaking anything along the way.
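The kind of scripted scenario test Julia describes can be sketched in a few lines. This is an illustrative stand-in in Python, not Microsoft’s actual test harness; the `FakePage` class, the `run_scenario` helper, and the button names are all invented for the sketch:

```python
# A minimal sketch of scripted scenario testing: a scenario is a
# recorded list of UI steps that gets replayed automatically before
# each code change. (Invented names; not the real Microsoft harness.)

class FakePage:
    """Stand-in for a browser page; a real harness would drive a browser."""
    def __init__(self):
        self.clicked = []   # record of every button clicked
        self.errors = []    # any failures observed along the way

    def click(self, button):
        self.clicked.append(button)

def run_scenario(page, steps):
    """Replay a recorded user scenario: click each button in order,
    and report whether the whole flow completed without errors."""
    for button in steps:
        page.click(button)
    return len(page.errors) == 0

# Example: a hypothetical "create volume" flow expressed as clicks.
page = FakePage()
ok = run_scenario(page, ["Volumes", "Create", "Submit"])
print(ok, page.clicked)
```

The point is just that the whole flow, not each piece of UI code individually, is what gets exercised on every change, against each targeted Windows Server version.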
So it’s super helpful.>>It seems like a good thing.>>Yeah.>>All right. Well, thank
you for showing us this.>>Yeah, cool.>>Now, before we go
back to the studio, if you’d like a chance
to win those prizes, it’s time for the Knowledge Check. [MUSIC]>>Hi, I’m Cosmos from
the Core OS Storage team. Let’s talk about
Hyper-Converged Infrastructure. HCI is rapidly becoming the way that organizations modernize
on-premises infrastructure, by consolidating from proprietary dedicated storage arrays
and network appliances, to simply industry-standard
servers that bring their own software-defined
storage and networking. Organizations around
the world are lowering costs, increasing performance, and
simplifying their operations. In fact, according to research by
the Enterprise Strategy Group, 54 percent of organizations with a mature data
center modernization strategy expect that deploying Hyper-Converged
Infrastructure will be among their most significant IT projects
for the next 12-18 months. That’s not just aspirational talk,
it’s already happening. According to data from IDC, Hyper-Converged Infrastructure
investment grew 57 percent last year. It’s now nearly $2
billion per quarter. That’s a lot of spending, and it means a lot of
opportunity for you in IT. It’s an exciting time for on-premises infrastructure
and especially, Hyper-Converged Infrastructure. So how do you get
from the Hyper-V you already have to Hyper-Converged
Infrastructure? The key is Windows Server 2019. The latest version of
Windows Server includes everything you need to deploy and
manage HCI, including Hyper-V, the foundational hypervisor
of the Microsoft Cloud, software-defined storage,
specifically Storage Spaces Direct, software-defined networking,
and Windows Admin Center, the future of Windows
Server Management, which has no additional cost beyond
your Windows licenses. What’s more, together
with our partners, we recently launched the Azure
Stack HCI Solutions program. An Azure Stack HCI solution
combines all of this with pre-validated hardware
that’s designed and tuned especially for
Hyper-Converged Infrastructure. So you get up and running
quickly and smoothly. Another way to look at it, it’s the same
Hyper-Converged compute, storage and networking, with the same hardware testing and
validation criteria as Azure Stack. But instead of being
geared toward running IaaS and PaaS within
an Azure-consistent portal, it’s a more familiar way to run virtualized apps on-premises
with increased efficiency. Between Windows Server 2019, Windows Admin Center,
and Azure Stack HCI, there is a lot to cover. To help us unpack it, I’d like to introduce my colleague, Greg, from the Core OS
Networking team. Hi, Greg.>>Hi, Cosmos.>>Here’s what we’re going
to do. A lightning round. We’re going to cover
25 things you need to know about HCI in just 25 minutes. It’s going to go fast. So we’re also publishing a blog with accompanying details and
links to documentation, so you can learn more
about anything we cover. All right. Greg, you’re ready?>>I’m ready.>>Let’s do this.>>All right.>>First, we have to start with
the Azure Stack HCI solutions. Launched earlier this year, these are a big deal because
they’ve really unlocked all
capabilities of Windows Server 2019 for broad deployment, and on a broader range of compatible
hardware than ever before. In fact, there are already over 70 solutions available
from our 15 partners, covering all corners of the world and available for
purchase right away. Now, you can browse them directly on Microsoft.com in a completely
new Azure Stack HCI Catalog. In your browser, navigate
to Microsoft.com/HCI. That’ll take you to a marketing page with a big blue button
labeled “Catalog.” Click the button, and here, you’ll see a rich store-style
experience where you can browse and filter
all the available solutions. For example, maybe you’re looking
for an HPE solution that’s All-Flash with iWARP networking
and available in Europe. Well, there you have it. You can even click the one
you’re interested in to link directly to the right page
on the partner website, so you can learn more and
engage with their sales team.>>Now, Greg. I noticed the HCI catalog no longer
makes a distinction between HCI Standard and HCI Premium, which used to mean SDN. So what’s changed?>>Well, Cosmos, now
all HCI solutions include what is required for SDN. Now, it doesn’t mean
you have to devote the entire infrastructure to SDN. You can have networking with your VLAN configurations coexist side-by-side with SDN
on the same hardware. Let’s take a look. You can go
into Windows Admin Center, select “Network” or
”Virtual network” and your machine will get
whatever settings it needs.>>Now, that’s great,
but there’s still the small matter of
actually deploying SDN, right? How do I do that?>>Well, that’s easier than ever. With SDN Express,
you can go to GitHub and download a set of scripts
called SDN Express. Then you run SDN Express and you’ll get a very helpful wizard
that will walk you through all the steps you need
to get SDN up and running in probably about
30 minutes or less.>>Now, we can’t talk
about what’s new for hyperconverged infrastructure
without talking about the most visible
and obvious thing and that’s Windows Admin Center, the future of Windows
Server Management and certainly the future of hyperconverged
infrastructure management. In fact, in Windows Admin Center, there’s a rich set of dedicated
screens that are especially for managing Storage Spaces Direct and software-
defined networking. You can easily do things
like provision volumes, monitor Storage Spaces jobs, get a view of your
virtual machines across the whole cluster including
their resource consumption, and dive into troubleshooting
your hardware with rich information about
servers and drives.>>With SDN, you can configure
your Virtual Networks, you can configure access control
lists, and set up the gateways that your applications
need in order to get outside of their virtual
networks as well.>>When Storage Spaces Direct first launched in Windows Server 2016, by far the top feedback was a
request for a better user interface, and that’s what we’ve delivered
with Windows Admin Center. The next most requested feature was deduplication and compression
for the Resilient File System, ReFS, Microsoft’s recommended file system for hyperconverged
infrastructure. Deduplication and compression is
a technology that saves you space by identifying duplicate portions of files and then only
storing them once. The savings you can expect
depend on what you’re storing, but they can range from
30 percent for videos and music, all the way up to
about 90 percent with highly repetitive workloads
like ISO files, VHD files, and especially
backups of those files. To make that clear,
90 percent savings means you get up to 10 times more usable
storage capacity for free. It’s easier than ever to
turn on deduplication, it’s just a single click of a rocker switch in
the Windows Admin Center. Sometimes, even with features
like data deduplication, you just need a lot of
raw storage capacity. This is especially true for use
cases like backup and archival. In Windows Server 2016, the maximum storage capacity in
a single cluster was one petabyte. In Windows Server 2019, that has increased by a factor
of four to four petabytes. To put that in perspective, that’s enough space to store all
of Wikipedia in every language, with complete edit history,
uncompressed, 50 times. That’s not just a theoretical number. In fact, at Microsoft
Ignite last fall, we partnered with our friends
at QCT to build such a system, with eight of their biggest
4U rack-mount servers. We built what we believe is the largest Storage Spaces
Direct cluster ever outside of a public cloud
at very nearly four petabytes. With Azure Stack HCI, you can deploy anywhere from 2 to 16 server nodes in a single cluster. You can always start small with
say four and then add a fifth, sixth, seventh, and so on, to scale with the needs
of your organization. But what if you want to deploy hundreds or thousands
of server nodes? Well, with Windows Server
2019, now you can. Suppose we start with
these eight servers in a cluster. What’s new in Windows
Server 2019 is we can encapsulate this cluster in
something called a cluster set, and you guessed it, we can add additional clusters
into the same cluster set. What’s important is this cluster set will present a unified
storage namespace, which means a virtual machine
running on one cluster can seamlessly live migrate to
a host in a different cluster, and continue to access its storage even though
its storage stayed behind.>>In 2016, we shipped
SDN and we heard a lot of feedback from
customers that they want faster gateways. So we’ve worked to
improve the gateway performance in Windows Server 2019. In many cases, we’ve improved
by over three times. If you have enough connections, you can go from four gigabits
per second up to 18 gigabits per second
through a single SDN gateway. This is for GRE tunneling. One of the really good use cases
for GRE tunneling is in connecting two
network controllers that are running in different sites, so they can connect their virtual networks and have
the workloads running in each talk to each other as
if they’re one network.>>With each release,
Windows Server gets more scalable. The numbers get bigger. It’s not just about capacity, it’s also about performance. Windows Server is on the leading
edge of x86 hardware innovation, consistently one of
the first operating systems, and hypervisors to support new hardware technology like the latest Intel
Xeon Scalable processors, Remote Direct Memory Access
(RDMA) networking, NVMe drives, and now Intel Optane, including Intel Optane
DC persistent memory. This is 3D XPoint-based persistent storage
that’s DDR4 pin-compatible, meaning it goes into a memory socket. Last fall at Microsoft Ignite, we teamed up with Intel and
built a 12-node cluster packed with Intel Optane
DC persistent memory and we used it to set the HCI
industry record with over 13.5 million IOPS from
a single cluster.>>So it’s not just about
hardware enablement, but we’ve also been making a lot of improvements to
the networking stack as well, the benefit either the host or
the guest and in some cases both. We made an assortment of
feature improvements both to TCP/IP and UDP performance, nearly doubling the performance. We implemented receive segment
coalescing in the virtual switch, which gives you a great
improvement in throughput while reducing the amount of
CPU utilization at the same time. We changed congestion
providers for TCP, defaulting to CUBIC, which
gives you higher performance across high bandwidth
but high-latency links. Finally, for the guest, we implemented the Data Plane
Development Kit, or DPDK, for Windows, which gives applications like
video processing the ability to get really fast access to the packets, bypassing
the host networking stack.>>Now, it’s not
just the Windows networking team that’s been focused on optimizations. We have on the storage team as well. One example is mirror-
accelerated parity, a technology that allows you to
create a volume that partly uses mirror resiliency and partly uses parity, or
erasure coding, resiliency. This lets you get the best of both: fast writes into the mirror portion, and then maximized capacity through the efficiency
of parity encoding. If we take Windows Server
2016 as the baseline, the performance of mirror-
accelerated parity has more than doubled in
Windows Server 2019. With all of that capacity
and performance, you’re going to want
to be able to see it, and with Windows Server 2019 you can. Hyperconverged infrastructure now has built-in performance history. So you can easily get
data from an hour ago, yesterday, or last week. There are over 50 key performance counters spanning
processor usage, memory, networking, storage latency,
and much more, that are automatically
aggregated and stored. There is nothing you need to set up, install, or configure, it just works. You can access it in
Windows Admin Center where you’ll notice that the charts
have a time range picker, allowing you to go back in time, and for more advanced scenarios, you can query using PowerShell. You know I can’t
believe we’re more than halfway through and we haven’t even
talked about core Hyper-V yet.>>I know that’s right, we’ve
made improvements to things like Shielded Virtual Machines where we’ve improved it so that even
if you don’t have network access to your VM you
can still connect to it, either through the console or
through PowerShell Direct. We’ve also added the ability
to run Linux inside your Shielded VMs with distributions such as Ubuntu, Red Hat, or SUSE.>>Now, regardless of whether
your organization has adopted Shielded Virtual
Machines yet or not, it’s important to protect
your Hypervisor host. That’s never been more true
than in the last year, where vulnerabilities like
Spectre and Meltdown have really shone a bright light
on side-channel attacks. In Windows Server 2016, Hyper-V used something called
the Classic Scheduler, which provides fair-share, preemptive round-robin scheduling for virtual processors,
essentially at random. In Windows Server 2019, there’s a new Hyper-V scheduler type
called the Core Scheduler, which is the new default. This further constrains
virtual processors to physical core boundaries,
further isolating VMs. It’s the default in
Windows Server 2019, but you can actually
use the core scheduler on Windows Server 2016 as well. Microsoft backported it last fall
in a cumulative update. In Windows Admin Center,
under Hyper-V settings, you’ll see a new radio button
called Hypervisor Scheduler Type, which you can switch
from classic to core.>>So, we’ve also been working
to improve web traffic that comes and goes from
a Windows Server machine. We’ve done this through HTTP/2, a technology that we first
shipped in Windows Server 2016, but we’ve made it better
in Windows Server 2019 by implementing things
such as Connection Coalescing, which allows websites with a common second-level domain to share the same certificate and as
a result the same TCP connection. This gives you fewer
round trips to the server and better overall performance
for your web applications. At the same time, we’ve also improved the Cipher
Suite selection process, which reduces the number of connection failures
while at the same time still enforcing a blacklist of
ciphers that are no longer secure.>>In Windows Server 2019, the core failover clustering
technology gets more secure as well. In particular,
failover clustering will now use exclusively Kerberos or certificate-based authentication for all cluster and storage
traffic between nodes. This means the dependency on the NT-LAN Manager or NTLM protocol
is completely removed. There’s no change required by users or deployment
tools to make this work, it’s just the out-of-box behavior. Speaking of clustering, an important part of
the operation of any cluster, is keeping your Windows servers fully patched with
the latest updates, and it’s never been easier than with Cluster-Aware Updating for HCI. Cluster-Aware Updating
is a technology that orchestrates the rollout of updates across
clustered server nodes. Essentially, it takes
the pause, drain, install, restart, and resume workflow, and repeats that across all nodes
in the cluster for you. Now, in Windows Server
2019 it’s even better. It has special integration
with Storage Spaces to wait after each node restarts
for storage resync to complete, and it more deeply integrates with Windows Update to check
whether an update truly requires a restart, and it only pauses and drains nodes if
the update does require it, minimizing disruption
to virtual machines. With Windows Admin Center, you can easily check for
updates and kick off an updating run with
just a single click with an all-new Cluster-Aware Updating
tool that looks almost exactly like the Windows Update tool
for a single machine. Let’s talk about Quorum. When you deploy a cluster in your core data center with
like six or eight or 12 nodes, you don’t really have
to think about Quorum. But our telemetry shows
us that with HCI, you’re more often deploying at
the edge in branch offices, remote sites, or field installations, taking advantage of HCI’s minimum footprint of just two
servers with four drives each. You don’t even need a switch, you can just wire them back to
back with a crossover cable. In these kinds of two node clusters, thinking about Quorum is essential. In Windows Server 2016, there were two ways that
you could use a witness to provide Quorum to
a two-node cluster. You could use a file share from
another on-premises server, or you could connect to the Azure
Cloud for a Cloud witness. But what about deployments that maybe don’t have any
other on-premises infrastructure and don’t have a reliable connection
to the Internet? Windows Server 2019 introduces
a third option: the USB witness. Literally just plug a USB key into
a compatible router or switch, and the cluster will
use that for Quorum. Whenever you deploy
a cluster, even a small one, you’re doing it for high
availability, for fault tolerance. Yet, just last year there was no HCI solution available
from any vendor where the storage could survive multiple simultaneous failures
with just two nodes. The reason is that with
a two-node cluster, storage resiliency is provided
using two-way mirroring, essentially, keeping one copy
of data in each server. This means you can survive a drive failure or you can
survive a node failure, but if both happened
at the same time, your virtual machines go down because they lose access to their storage. This wasn’t great. So our engineering
team took inspiration from an old technique called
RAID 51 or RAID 5+1. The idea is to do parity
resiliency within one server, and then mirror that across to the other server, giving you
parity on the other side as well. This is what’s often called
a nested RAID level. In Windows Server 2019, Storage Spaces has
a new resiliency type: it can now do nested resiliency. This means you can survive multiple simultaneous
storage failures even with just a two-node cluster. That includes a drive failure in
each server at the same time, or a drive failure and
the other server going down; both are totally fine.>>While storage
resiliency is important, it doesn’t eliminate
the need for backups. But for smaller sites
and branch offices, it doesn’t make sense to have costly backup
infrastructure on-premises. But for that, we have
Azure Site Recovery. Now Azure Site Recovery is integrated into
Windows Admin Center with a one-click experience that lets you back up your VMs to Azure where
they’re safely stored.>>Now, typically you don’t
just have one branch office. That’s why you call
them branch offices. Small branch offices
may not have dedicated IT personnel to respond to problems, so you need to monitor the HCI you deploy to all your
branches centrally. The Health Service is
the component in Windows Server that provides the alerts you see on the Windows Admin
Center Dashboard, and now, it integrates
with Azure Monitor. Simply install
the Azure Monitor agent on each server in the cluster, and then when something goes
wrong in any branch office, say a server goes down or
you’re running out of capacity, or perhaps a drive fails, Azure Monitor will send you
an e-mail or SMS notification, showing you all the details
of what’s happened, so you can dispatch someone
from headquarters to respond.>>Now, as you move
more workloads into Azure, the need to connect to Azure
becomes even more important. But when you have
many branch offices, each one with
its own infrastructure, it becomes difficult to deploy site-to-site VPN or ExpressRoute at
every one of these sites. So for that we built
Azure Network Adapter. This is an integration into Windows Admin Center that
makes it very easy to connect a single server running pretty much anywhere to an Azure virtual network gateway, so you can get access from that
server into your Azure Files, or your other Azure VMs
running in Azure. To find Azure Network Adapter, go to the network settings of any server in Windows
Admin Center and just click on the “Azure
Network Adapter” button. Now, when you have a lot of
remote offices or smaller offices, they may not always have a really
fast connection to the Internet. So for these offices, you want to make sure that
any background traffic that’s going between them or your core
datacenter or to Azure, gets a lower priority. For that, we have
a really good technology in Windows Server called LEDBAT, which is another congestion provider that will back off these lower-priority network flows in order to let the higher-priority
traffic take over. When that higher priority
traffic slows down, then the low priority traffic
will pick back up again, usually within a second or two. This is easily enabled
either through PowerShell, or through SCCM for distributing updates just by going to
your distribution point settings, if they’re running
Windows Server 2019. We’re almost out of
time. Speaking of time, for those of you that are
in regulated industries where you need to have
really accurate clocks, sometimes down to
microsecond accuracy, we made a lot of improvements
to get you there by implementing features such
as the Precision Time Protocol, software timestamping,
even additional granularity on the clock to get it more accurate. We’ve implemented traceability
which gives you the ability to go in and see the logs
where your clocks were set, so that you can go back and prove that your clock is as
accurate as it needs to be. Finally, we added
leap second support. Cosmos, do you know
that every few years, a second can get added or
removed from the clock?>>So literally, some minute will randomly have an extra second.>>Yeah, that’s right.
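As a quick aside on what a leap second looks like in software: Python’s `time.strptime` (used here purely as an illustration; it has nothing to do with the Windows implementation itself) accepts a seconds field of 60 for exactly this case:

```python
import time

# During a leap second, the last minute of the day runs
# 23:59:59 -> 23:59:60 -> 00:00:00, so a timestamp can legitimately
# carry a seconds value of 60. Python's strptime allows seconds up
# to 61 to accommodate this.
t = time.strptime("2016-12-31 23:59:60", "%Y-%m-%d %H:%M:%S")
print(t.tm_sec)  # the parsed seconds field is 60
```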
Take a look at the screen. You’ll see one in
action right here.>>Whoa, weird.>>There it is, 60 seconds.
You don’t see that every day.>>Finally, number 25. This one’s not a new feature, but it’s an important milestone. Last year around this time, we shared that 10,000 clusters around the world were running
Storage Spaces Direct, far exceeding our wildest
expectations for how quickly you, the Windows Server community,
would roll out this technology. Well, one year later, I’m humbled to share that over 25,000 clusters worldwide are now
running Storage Spaces Direct. This is an astonishing rate
of growth since last year. The momentum is just amazing. On behalf of the Windows Server
Engineering team, we want to thank all of you. The community on Twitter, on Slack, on TechNet, our wonderful partners, and
of course, our customers. Thank you for your trust
and your business. All right, that is it for
our lightning round. We did it. Twenty-five things in 25 minutes. Was this a good format? Should we
do more sessions like this one? Let us know. Tweet us with the hashtag below and
tell us what you think. As you can see, it’s an incredibly exciting time for
on-premises infrastructure, and especially, hyperconverged
infrastructure. Whether you’re deploying
a tiny two-node cluster, or at petabyte scale. Whether you need the best
performance or the best security, or you just want that gorgeous new dashboard in
Windows Admin Center. HCI gets better for everyone
with Windows Server 2019. To get started, find solutions from your preferred hardware vendor
at Microsoft.com/HCI. Install Windows Server 2019
from aka.ms/WindowsServer. Manage with Windows Admin Center, which you can download from
aka.ms/WindowsAdminCenter, and optionally connect to helpful Azure services
that make on-prem better. Be sure to watch the rest of
the Windows Server Summit, including Haley’s session where
she’ll tell you more about that. [MUSIC]>>But first, it’s time to go back behind the scenes and talk with
developers on the Windows Server team. [MUSIC]>>Hey, Scott, do you mind if
I bother you for a second?>>Hey, Cosmos, come on in.>>Cool, thank you. So listen, we’re here with the audience
of the Windows Server Summit, and we just saw the demo with persistent memory and
the 10-million-plus IOPS.>>Yeah.>>So we had a question.>>Okay.>>Wondering if you could
give us an intuition for why it is that these persistent
memory devices are so much faster?>>Sure. So take these devices here. This is like a big DDR memory module. So on any system, you just plug this in
to your DDR4 slot, and now it’ll work.>>So it goes right
next to the processor?>>Yes.>>So there must be
a bandwidth advantage there, in terms of memory bus.>>Definitely, the bandwidth
of a memory bus is a lot higher than something
like a SaaS or SADA, and they’ve been in
all the other cases even for PCI.>>So what’s the protocol that
you use to talk to one of these?>>So for processor memory, we don’t need a protocol. So that’s one of the advantages. You just use the CPU
instruction to do the I/Os.>>That sounds really efficient, no wonder that’s so much faster.>>Yeah.>>Well, folks, Scott won’t tell
you because he’s too humble. But he actually works on the standards group that defines
how these technologies work. So you could not be hearing this
from a more authoritative source. Scott, thank you very much.>>Okay, thank you.>>That means it’s time for round
two of the knowledge check. [MUSIC]>>Hi, I’m Haley Rowland, a Program Manager on the Server
Management Experience Team. Hybrid cloud has become top of
mind for many of our customers; they’re realizing the business
value that it provides. For example, it reduces
the on-premises footprint, improves IT agility, and
maximizes efficiency. One area where the value
of Hybrid Cloud is prominent is in backup
and disaster recovery. Many businesses are
running today without a real backup or
disaster recovery plan. This puts those businesses at
risk if disaster were to strike. You don’t want to find
yourself in a situation like this without a backup or
disaster recovery plan; it’ll be too late and you’ll lose your company’s valuable data
and workloads. To protect against
this kind of situation, you could either make
the costly decision of setting up a new data center as a secondary site or you could
use Azure as your failover site. With Azure you know you’re getting the additional security that comes from Azure scale and durability. In the latest 1904 release
of Windows Admin Center, we’ve made it easier
than ever to realize the benefits of Azure directly
on your Windows Server. So whether you’re completely
on-premises today, or you’re already leveraging Azure services to extend the
capabilities of your data center, Windows Admin Center’s simple
tooling and onboarding experiences get you up
and running in no time. So let’s see how this works. The Azure hybrid services tool is a new tool in Windows Admin
Center that allows you to discover, set up, and access Azure
services from one place. By clicking on this “Discover
Azure services” button, I get access to the curated set of services that bring value
to my hybrid environment. Now I already talked about
the importance of having a backup and disaster recovery plan, and to that end Windows Admin Center integrates with Azure Backup and Azure Site Recovery to help protect the workloads that are running
on-premises from disaster. We’ve also looked for opportunities
to take what was previously a difficult setup or configuration experience
from the Azure portal and make it a more seamless, accessible
one from Windows Admin Center. A great example of this is
the Azure Network Adapter. This is a feature in
Admin Center’s Network tool that allows you to
create a point-to-site VPN to connect your
on-premises servers to resources in an Azure Vnet. To give you an example of how much we simplify this process
in Admin Center, it used to take our best Microsoft networking expert three hours to do this and now anyone can set
it up in just three clicks. Now I’m not going to have time
to cover all these services, but I’d love to show off a
couple in more depth and I’ll get to the other services that I
haven’t mentioned in just a bit. Let’s start with one
of the most common and important use cases of the Cloud, storing and protecting your backups. The Azure Backup tool in Admin
Center helps you set up, manage, and secure your server
backups protecting your Windows servers
against disasters, accidental deletions, corruption
and even Ransomware attacks. So let’s take a look
at how I set this up. First, I’ll need to log in to
Azure so that Admin Center can automatically populate
the details of my subscription. Then I’ll just need
to review the smart defaults that are provided here for me about the recovery services vault where my backups will be stored. Next, in step three, I select what I actually want to back up: system state and my drives, and then I see a convenient estimate
provided there for me. Next step is to choose the backup and
retention schedule that will best meet my backup needs. Finally, I’ll secure my backups
with an encryption passphrase so that my backup data is going to be protected both in
transit and at rest. Now, I can click “Apply”, and Admin Center will automatically
provision resources in Azure and configure my
server for Azure Backup. Let’s go see what this looks
like once it’s actually setup. I have a rich dashboard that gives me access to important information
like my recovery services vault, the latest backup status
and latest backup time. What’s really cool is I can
click on this hyperlink here for the recovery services
vault and it will bring me into the Azure portal. Here, from the Azure portal, I get
that centralized management view, giving me a rich
experience for managing Server backups at scale
from a single location. So here on the recovery
services dashboard, I’ll see the backup items that I’m protecting and the storage
that’s being used. I could even come here to retrieve my backups if something were to
happen to my on-premises servers. But let’s go back to Windows
Admin Center where we have a more server
centric view and capabilities. For example, if I’m about to patch a server I might want to
kick off an Ad-Hoc Backup. So I just select whether I want
to back up my files and folders or my system state and then simply
click “Backup”. It’s that easy. Then if I go over to the jobs page, I can see that backup job that
I just started as well as a history of jobs and any errors
that may have occurred. I can even configure alerts
and notifications so that I can remotely monitor
my servers’ backups. From the recovery points page, I can see a history of all the recovery points and
recover data from the server. Now let’s look at
the enhanced security aspect of the backup tool in Admin Center. Something that admins
do regularly for compliance is cleaning up
backups as required, which means they sometimes need to
delete backup data. Something else that wants to delete your backup data
is ransomware. The bad guys have gotten more
sophisticated and are targeting not only your primary data storage
but also your backups, so we’ve thought about that too. When ransomware tries to delete
backup data from your server, it’s not going to
succeed because it needs a security PIN, and the security PIN requires access to
the recovery services vault. As an additional safeguard for you, Azure backup keeps your backups for up to 14 additional days
after deletion, in case of a mistake
or malicious admin. So this way you can
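To make those two safeguards concrete, here is a small model of the idea: a security PIN gates deletion, and deleted backups survive a 14-day soft-delete window. (Illustrative Python only; the class and method names are hypothetical, not Azure Backup’s actual implementation.)

```python
from datetime import datetime, timedelta

SOFT_DELETE_DAYS = 14  # deleted backups are retained this long before purge

class BackupVault:
    """Toy model of the two delete safeguards described above."""

    def __init__(self, security_pin):
        self._pin = security_pin
        self.active = {}        # backup name -> backup data
        self.soft_deleted = {}  # backup name -> (backup data, deleted_at)

    def delete(self, name, pin, when):
        # Ransomware without access to the recovery services vault
        # cannot obtain the security PIN, so this call fails for it.
        if pin != self._pin:
            raise PermissionError("security PIN required to delete backup data")
        # Even an authorized delete is only a soft delete at first.
        self.soft_deleted[name] = (self.active.pop(name), when)

    def recover(self, name, when):
        # Within the soft-delete window, a mistaken or malicious
        # deletion can still be undone.
        data, deleted_at = self.soft_deleted[name]
        if when - deleted_at > timedelta(days=SOFT_DELETE_DAYS):
            raise KeyError("backup was purged after the soft-delete window")
        del self.soft_deleted[name]
        self.active[name] = data
        return data
```

The point of the sketch is the ordering: the PIN check happens before any data is removed, and removal itself is reversible for two weeks.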
protect your backups and always ensure that you have
multiple recovery points. Now as you’ve seen I’ve set up
Azure Backup on the server, but I’ve also configured
the server to connect to a few other Azure services available
through Windows Admin Center. So now if I go back to
the Azure Hybrid services tool, I’ll see a list of all the services that I’ve connected
to from the server. So this hybrid services
tool now serves as a central hub from
which I can link out to connected Azure Resources in the Azure portal or to the relevant
tool in Windows Admin Center. Plus I can learn about
new integrations that we’re continuing to add
in Windows Admin Center. For example, the newly
added tool Azure File Sync. One common pain point we
often hear is about running out of capacity on
on-premises file servers. You could purchase
more and more storage but that’s a costly proposition. We’ve also heard you
express a need for better and easier data
sharing across sites, and these are exactly
the challenges that Azure File Sync was
designed to address. Let’s say I have a file
server in Seattle, when I purchased it I
thought 20 terabytes would be plenty and now
my storage is getting low. Before I would have had to purchase additional hardware but instead I can set up Azure File Sync
which allows us to tier the least-used
files to the Cloud. What this means is that
all the data being actively used,
the hot data, stays
local, while data that hasn’t been touched transparently
bottomless storage and your file server
becomes a hot cache. So let’s look at this in
Windows Admin Center. To begin setup, I’ll need to
install the Azure File Sync agent.>>Here you can see
the installation directory, and I can configure how I
want to update the agent. If I use a proxy in my environment,
we also support that. But I’m going to leave everything as is and go ahead and
deploy that agent. But while that continues, let’s switch to
another file server where the Azure file sync agent
is already installed. You’ll notice that it provides the installed and latest versions of that agent for my convenience. But I’m ready to continue
on to the next step where I’ll configure
the subscription information, the resource group
that I want to use, and the storage sync service that I want to register the server to. The next step is to actually register the server with the Azure
storage sync service. So I just simply click on
“Register” and Windows Admin Center will automatically
establish the trust relationship between these resources. So at this point, I’ve deployed the Azure File Sync agent on
my file server remotely, connected a storage sync service with my Azure
subscription information, and registered the file server with the Azure Storage Sync Service. Once I finished setup, I can see Azure file sync
configured for this file server. But because we just
set the server up, I don’t have any shares
synced just yet. So let’s head over to the next tab
where I actually have a file server configured with
shares tiering to the Cloud. We can see my local agent
is up-to-date, what sync services this
server is registered to, and I have an intelligent hyperlink
to the Azure portal. Now, if I take a look at the
server endpoints below, I can see all the file
shares synced to Azure, whether Cloud tiering is enabled, and what the tiering policy is, for example, 20 percent. If I want to see
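As an aside, that 20 percent policy is about volume free space: the coldest files are tiered until the volume has at least that much free. A rough sketch of the decision logic (illustrative Python; this is not Azure File Sync’s actual algorithm, and the function name is made up):

```python
def files_to_tier(files, volume_size, policy_pct=20):
    """Pick the least-recently-used files to tier to the cloud until
    at least `policy_pct` percent of the volume is free.

    `files` is a list of (name, size, last_access) tuples; a larger
    last_access means more recently used. Rough sketch only, not
    Azure File Sync's real algorithm.
    """
    target_free = volume_size * policy_pct / 100
    free = volume_size - sum(size for _, size, _ in files)
    tiered = []
    # Coldest files (smallest last_access) are tiered first.
    for name, size, _ in sorted(files, key=lambda f: f[2]):
        if free >= target_free:
            break
        tiered.append(name)
        free += size  # a tiered file leaves only a small stub locally
    return tiered
```

So the hot data never moves; only files that haven’t been touched in the longest time go to the cloud, and only as many as the policy requires.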
a specific sync group, I can click on it to take me into
the exact right spot in Azure. Now, I mentioned at the outset
that Azure file sync does more than free up file capacity on
your on-premises file servers. It actually helps you sync data across file shares in
different locations. The great thing about
Azure file sync is when you sync your file
server to the Cloud, it provides the ability to sync
across all your branch offices. So my files that I
saved to the Fileshare in Seattle get uploaded via Azure file sync and then shared to the other offices in New
York or Tokyo, for example. If I also have
some Azure PaaS services or ISV apps that need access to
data on a synced file share, because everything
already resides in Azure, they can directly access that data in Azure and don’t need to use precious network bandwidth
going to a branch office. Having multiple branch offices or remote locations is common
among our customers, whether they’re managing
several schools across a district, multiple stores in a chain or otherwise have servers
in many locations. The problem you encounter
in any of these cases is how to monitor and manage your servers across all
of your environments. Historically, you may have
needed to shell out big money to an IT company to manage
all of these environments, but now you can fill this gap
with Azure Management Services. Using robust Azure Monitor and Azure automation solutions like Virtual Machine Insights
for Azure Monitor, Azure update management,
and Azure Security Center, you can monitor your on-premises or Cloud servers centrally
from the Azure portal. Windows Admin Center makes it
easier than ever to attach your on-premises servers to a Log Analytics Workspace and
Azure Automation account, which are the resources
in Azure that light up these Azure Management Services. So let’s say that I’m the IT admin of a school district
and I’d like to get an e-mail if any of
the servers that I manage at three of my different
locations goes down. Let’s take a look at how I’d
set that up with Admin Center. I’ll connect remotely to one of
the servers in Admin Center, and then I can click
this new manage alerts button. From here, I have a pretty typical
Azure onboarding experience where I need to provide
the subscription information, the resource group
that I want to use, and then the Log Analytics
Workspace, for which I’ll either create a new one or use an existing one. Admin Center automatically installs the necessary agent on the server, connects it to
the resources in Azure, and configures the server to send common performance counters
to my workspace in Azure. It also installs
the Virtual Machine Insights solution in my workspace. But I’ll talk about
that in just a moment. So once this is all set up, I can use the intelligent
hyperlinks provided here to launch out
into the Azure portal, to see the Log Analytics workspace that I’ve just attached
this server to. The Log Analytics Workspace serves as the central repository where I
can connect all my servers both on-premises and in the Cloud and see the logs that are
collected into this workspace. So to configure an alert, I can run a predefined
query, in this case, I want to know about the heartbeat of all the servers or I can
create a custom query, and then I’ll just go and create
a new alert based on that query. A lot of these fields
are already populated, but I’ll need to define
an alert condition. In this case, I care about whether any of
my three servers goes down, so I will make sure that I’m getting heartbeats from at least
three different computers. So I want to trigger an alert if any of those three goes down. Next, I’ll need to
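That alert condition can be sketched as a simple evaluation over heartbeat records. (Illustrative Python only; in the real service this is a Log Analytics query over heartbeat data, and the function below is a made-up stand-in.)

```python
from datetime import datetime, timedelta

def should_alert(heartbeats, now, expected_computers=3, window_minutes=5):
    """Return True when fewer than `expected_computers` distinct
    machines sent a heartbeat inside the evaluation window.

    `heartbeats` is a list of (computer, timestamp) records, standing
    in for rows of a Log Analytics heartbeat table.
    """
    cutoff = now - timedelta(minutes=window_minutes)
    alive = {computer for computer, ts in heartbeats if ts >= cutoff}
    return len(alive) < expected_computers
```

The key detail is counting distinct computers rather than raw heartbeat rows: one chatty server must not mask two silent ones.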
configure an action, but I don’t have any of those set up. So let’s hop over into the manage actions portion where
I can create a new action group. Now, many of our Windows Admin
Center users have asked us for e-mail notifications but
Azure Monitor actually has a really robust framework that
allows us to do so much more. Yes, I can configure
voice, text, or e-mail alerts, but I can also do remediation actions like Azure Functions or
Automation runbooks. In this case, I’ll just create an e-mail alert and
provide my e-mail. I’ll go ahead and click “Okay”, and then I’ll just create
that action group. Next, we’ll head back into the
alert that I was creating, and I’ll just need to refresh
this pane so that I can see that new action group
that I just created. There it is. I’ll go ahead and
select it and click “Okay”. The final step is to just
give my alert a name, so I’ll just call it server
heartbeat down so that I know what to do if I receive
an e-mail alert about this. I’ll give it a description and
then a severity level. That’s it. I just created the alert rule, and
I can rest easy knowing that if any one of my servers goes
down, I’ll get an e-mail. If I go check my e-mail, I can see that one of
my servers did indeed go down and stopped sending a heartbeat, so I’ll need to drill down later
on to figure out what’s going on. So I mentioned a little earlier
that Windows Admin Center installs the Virtual Machine
Insights solution for Azure Monitor when you onboard
a server into a workspace. That’s because it provides so much
more than just e-mail alerting. You’ll note that though
this Azure Monitor solution is named Virtual Machine Insights, it actually works for
on-premises servers as well. From the Azure portal, I can see an aggregated view
of performance counters across all my servers both those
running on-premises or in Azure. With the map feature, I can actually drill down and look at the connectivity of
one of my servers. Looking at the ports that
they’re communicating over and how the traffic is flowing
between different server endpoints. I’m only just scratching the surface of what
Azure Monitor can do. Windows Admin Center
makes it really easy to take the first step
and set this all up. So what you just saw
was Azure Monitor, it’s an Azure Management Service; and another Azure Management
Service is Update Management. Whereas Monitor allowed you to view aggregate charts and create alerts centrally across
all of your servers, Update Management
allows you to roll out updates across all of your servers; and Windows Admin Center provides a similar set-up experience
to that of Azure Monitor, with intelligent links that
link out to the Azure portal to centrally manage updates across all the servers in
your hybrid environment. Azure Security Center is a third solution available in
Azure Management Services that gives you unified
security management and advanced threat protection across
all of your hybrid workloads. We’re currently building
integration in Windows Admin Center to make it easy to set up from
your on-premises servers, but Security Center is available
from the Azure portal today. These three solutions: Azure Monitor, Azure Update Management
and Azure Security Center, allow you to centrally manage
your hybrid environment from Azure; and Windows Admin Center
makes the setup and configuration from your
on-premises servers, seamless. But we know that using
Azure to monitor and manage your servers isn’t
a possibility for all of you. There are legitimate reasons
why you might not want to or be able to connect
servers to Azure Management, and that’s why System Center
continues to play an important role for managing and monitoring environments at scale. As a comprehensive suite
of solutions, System Center provides
additional value across your environment
and platforms, whether those are completely
disconnected from the Internet or live
in a hybrid state. Windows Admin Center complements both Azure Management Services and System Center as
the remote management tool that gives deep single server and
single cluster drill down, for troubleshooting
configuration and maintenance. In the next segment, you’ll hear about
all the great innovations added to System Center 2019. But before I close, I
want to let you know that there’s so much more that we’ve added into Windows Admin Center beyond the hybrid capabilities
that I’ve discussed. So I highly encourage you to check
out Daniel’s on-demand session, for a deep dive on
Windows Admin Center where he’ll cover new features
that we’ve built, as well as highlight some of
the powerful extensions built by our third party partners in the emerging Windows Admin
Center extension ecosystem. The next session coming up is what’s new in System Center 2019. Remember to take
your knowledge check, but first, let’s hear from Cosmos talking with the developers
behind Windows Server. [MUSIC]>>Hey Omar, you’ve got a sec?>>Sure, yeah.>>So we are here
with the audience of the Windows Server summit and they sent us some really great questions. So we’re trying to find
the folks who can get us some answers. You think
you can help us out?>>Of course, sure.
Come in, have a seat.>>Thank you. So we just heard
from Haley all about how Microsoft is focusing more and more on bridging between
Windows Server and Azure.>>Okay.>>Now, it seems like
for that to work, you would want a similar software
architecture on both sides. So the question we got was, can you tell us a little bit about
how Azure uses Windows Server. Is it totally different technology
or is it the same?>>Sure, yeah. So technologies
are pretty much the same. It’s a standard Windows Server, it has some slight modifications, nothing major, and
the Virtualization Stack, the Hypervisor, they’re
essentially all the same. All the same features that you use On-Prem you’d also use on Azure. Of course, they augment it with additional features,
additional software, but most of that core content really accrues all the way over to On-Prem, Private Cloud and Windows in general.>>Let’s say, your team which works on the Windows Server networking and you’re also I guess working on Azure networking at the same time?>>Yes we are. They have
their own networking team. We collaborate very
closely with them. We do all the requirements
consultations, gathering, and
co-engineering with them. Yeah, the technologies
that we build, in terms of
the Virtualization Stack, the vSwitch, the NetVSC device, exotic offloads like RDMA and SR-IOV, all of those are things that
we developed here and are ultimately consumed by the team
over there and augmented further.>>So if I’m a customer and I’m
thinking about deploying RDMA, that’s something that
Azure is using too.>>Absolutely, a very heavy user. Probably one of
the most advanced users of all the big public Clouds. So Azure Storage for example, the entire back-end is
really based off of RDMA clustering for
ultimate performance. RDMA storage to compute hosts, as in those hosts that are hosting your VMs in
the tenant workloads, those are also powered by
RDMA to the compute hosts. In general, it’s the same technology, it’s the same capabilities
that we provide On-Prem. So when you’re running things
like S2D, that is, Storage Spaces Direct, or similar technologies On-Prem, it’s pretty much
the same capabilities.>>That is super cool. That’s the kind of insight you can’t get without talking to the team. All right, thank you very much.>>All right.>>Now, you know what that means, it’s time for the Knowledge Check. [MUSIC]>>Hi, I’m Hitesh, Senior Program Manager for
Microsoft System Center. Thanks for joining
me in learning about what’s new in System Center 2019. First of all, thank you for being such a great group of customers
to work with over several years. Today, I’m going to talk about
the new and improved System Center 2019 which was made available to
you in March 2019. System Center 2019 is a long-term servicing
release, which offers five years of mainstream and five years of extended support
to the customers. As we continue to enhance
the System Center Suite, we also look forward to
the next version of System Center which will come on the back of
the next version of Windows Server. For the Windows Server customers, System Center Suite
has always offered the complete set of tools that are needed to manage their data centers. The whole ambit of
data center management, ranging from deployment,
monitoring and automation, is served by the respective products
in the System Center suite, which have matured and
evolved over many years along with evolution of
the Windows Server platform. Products such as Operations Manager,
Virtual Machine Manager, and Data Protection Manager provide built-in integrations
with Microsoft Azure, making it easier for
customers to leverage the cutting edge services of Azure and plan the migration
with the Cloud. Before I start to dig deeper
into individual products, I have an announcement to make with
respect to System Center 2016. System Center 2016 will
support Windows Server 2008 and Windows Server 2008
R2 extended security updates, which were recently announced. Let me now take you through the new features in
Operations Manager 2019. Monitoring mission-critical
infrastructure, workloads, and applications is becoming more
relevant than ever, with customers scaling
their deployments to support growing business needs. System Center Operations Manager
has kept pace with the technology landscape
by supporting monitoring capabilities for a variety of resources and applications, in both On-premises and
hybrid environments. The following is a list of new capabilities that SCOM
2019 offers. We’ll touch upon a few of these
during the session and dive into more in the System Center 2019 deep-dive session. Let’s begin. Businesses across the globe are increasingly adopting
to Hybrid Cloud, where they continue
their investments in the On-premises
infrastructure and tools and leverage the cutting
edge services in Azure. While the strategy enables enterprises to be more
flexible and efficient, it also raises the need
for a single pane of glass for monitoring the health
of such hybrid deployments. Azure Management Pack enables
Operations Manager to become this single pane of glass by integrating with
management services in Azure. Let us see how you can take
advantage of both worlds. What you see here,
is the Azure portal. So I have created a demo application in Azure,
in Application Insights, which I am viewing in the Azure portal, and now you can see that there are certain performance metrics
which are already being collected. So I will show you how you can bring in alerts and these performance metrics into these
console, from Azure. These are the alerts that
I have already configured. As you can see, the signal types
are listed there. So you can see
metric alerts are there, and there are log search alerts, which are also called scheduled query rules. Here you can see that I have already configured my Azure workspace
in the SCOM console. It’s already documented. In the interest of time,
I’m just skipping it and you can easily go to the
documentation for doing that. Now, let us see what service types
you’ll want to monitor here. So we have checked a few service types
that we need to monitor. I will now go to the metrics which
I want to collect the data for. So we will collect
the data for some metrics. For example, for the
server response time, we will collect the data
and also for availability. Let me now show you what
Azure health looks like in the Monitoring pane of the web console of
System Center Operations Manager. Here you can see that
our application is critical. If you see, you can also see
some performance data here. Let’s delve deeper into it. So our application is critical, which is very much
visible in this console. If you go to the performance graph, we can even see the availability, failed requests and
server response time. Let us quickly go to
server response time here. Here we see the same
performance graph which was also visible to
us in the Azure portal. We can also view the same thing
in the SCOM desktop console. Let me now move on to the
Virtual Machine Manager 2019. On the deployment and management capability enhancements
in System Center, let’s talk about how new features in Virtual Machine Manager
are helping deployment of hyperconverged infrastructure
and the new capability introduced for managing
hybrid environments. Before this, let us glance through the new features that are
available in VMM 2019, where we have made
significant investment in the areas ranging
from supporting HCI, enabling hybrid capabilities, to improving the performance
and security of the product. Hyperconverged
infrastructure adoption has seen significant growth
in the past few years. We see a trend of customers shifting from traditional server
and SAN to HCI, which helps them lower the total cost of ownership
on their deployment. VMM simplifies the deployment
and management of HCI clusters. There are many feature additions
to support S2D in VMM: customers can deploy, update,
using VMM 2019. Storage in an HCI cluster is built out of
commodity storage devices. A commodity storage device has a greater chance of failure
compared to traditional storage. VMM 2019 helps admins
monitor the health status of physical and virtual storage
components deployed using S2D. Users can monitor health and
operational state of physical disks, storage pools created
on those physical disks, and logical units
created on the pools. Knowledge of the health and operational state will help administrators troubleshoot
and resolve issues. Let us see how this can
be achieved in VMM. VMM Server fetches
operational and health data from Windows Server
2019 storage provider cache. A refresh is done by default every two hours to
update the VMM database. However, the refresh time can be modified using
the registry details. Let me show you how this
looks in the VMM console. Just head to the “Fabric” pane in the VMM console and look for
Classification and Pools. Here you can see the health
status of this pool. When we expand it, you will see that we have the physical disks there
and also the logical units. Let us try to click on one of them. On clicking through a physical disk, you can see the health status down below and the
operational status as well, which is currently
showing maintenance mode. So this is how you can
monitor the health of your storage in
Virtual Machine Manager. Let’s switch gears and now talk about System Center
Data Protection Manager. Whether applications are
in Azure or on-premises, enterprises need
a good business continuity and disaster recovery strategy
for their applications. A good BCDR solution makes sure
are always available, and if something goes wrong, these applications can
be recovered and brought back online within their
recovery time objective. System Center
Data Protection Manager is an enterprise BCDR solution
that provides a couple of things. It protects your data
center workloads at scale. It is capable of
application aware backups, be it SQL workloads,
Exchange or SharePoint. DPM can also be integrated with Azure backup for
a hybrid backup strategy. DPM provides you with powerful capabilities in monitoring, reporting,
and automation. Data Protection Manager can backup your workloads to disk storage
for short-term protection, and enterprises can use
either one of these options: they can either go to tape storage or Microsoft Azure for their
long-term protection needs. While this flexibility is
great for large enterprises, if you are a small or a medium enterprise and prefer
Azure for long-term protection, and do not want to worry about the complexity and cost of
maintaining tape storage, then Microsoft Azure Backup
Server also known as MABS, is a great solution for you. MABS does not need a System Center
license and is free to own. It uses a pay-as-you-go model and
the cost is based on your usage. MABS V3 was released
in November 2018. Let us look at what’s
new in DPM 2019. DPM 2019 brings with it significant performance
improvements to backup time. It supports newer
workloads of Windows, SQL, SharePoint and Exchange. If you use VMware to
host your workloads, V now support backing up your
VMware workloads to tape storage. We also backup these workloads
in parallel for faster backups. You will also be able
to take advantage of the rich analytics and monitoring capabilities in Azure
to monitor your DPM servers. Let us now take a deeper look into the performance enhancements
that we have made in DPM 2019. DPM 2016 introduced Modern
Backup Storage also called MBS, which uses the Resilient File System. ReFS was a significant
improvement over NT file system, since it offered great scalability
performance and helped with data integrity and allowed
significant space savings. DPM 2016 with MBS provided 30-40
percent space savings, and backups were 70 percent faster
as compared to DPM with NTFS. Now, let us talk about the
performance improvements in DPM 2019. Well, you would be happy
to know that we have achieved over 95 percent
improvement in backup time as compared to DPM 2016. We improved the
cloning engine in DPM. Now multiple file regions
are cloned in parallel to reduce the time
required to create recovery points. ReFS is designed to take
advantage of tiered storage. But this means that
if a storage pool is made up of solid-state drives
and hard disk drives, SSDs are used as a cache layer. DPM 2019 now takes advantage of
this resulting in faster backups. We have also made significant
improvements to the ReFS in Windows Server 2019 resulting
in performance gains. Well, from our tests, backups with DPM are 95 percent faster than with DPM 2016. Let me switch gears and talk about
the hybrid capabilities of DPM. One such capability allows you
to register DPM with Azure so that you can use Azure for
your long-term protection needs. Let me show you how it’s done. This is a four-step process. The first one is to create
a recovery services vault, which I have already done
in the interest of time. Now, we need to download the MARS agent and the Vault
Credentials to the DPM server. Head on to the recovery services
vault, click on “Backup”, and now select on-premises and the Hyper-V VM that we
really want to back up. Once this is done, we will download the MARS agent and the Vault Credentials.
This is how it is done. Download the MARS agent, and now download
the Vault Credentials. These credentials
would be used later. Now, we need to
register Azure with DPM for long-term protection.
This is how it’s done. Go to DPM console, click on the “Management” pane
and click on “Online”. Now, we click on “Register”
and click “Next”. Here, we need to select the Vault Credentials that
we had downloaded earlier. As we move forward, we need to create
a passphrase for our backups. Click on “Generate passphrase”. This will take some time to
register with Azure Backup. We now see that the registration
has completed successfully. We now move on to the last step, which is creating a
protection group with Azure as the long-term retention target. This is how it’s done: go to Protection in the DPM console,
and click on “New”. We now select our servers
for the protection group, selecting a Hyper-V VM
in the S2D cluster. We now follow a couple of steps until we close
this wizard; before that, we select Disk as our short-term protection and Azure for the online
long-term protection. Go through the steps and close
the wizard successfully. We now see that we have
successfully registered DPM with Azure for
long-term protection needs. Now, let’s switch and talk about other products in the
Systems Center Suite. Orchestrator and Service Manager. We understand that partners
and customers have invested significantly and rely on both
Orchestrator and Service Manager. We want to emphasize that
we are continuing to plan improvements that add
value to these products. Systems Center 2019 includes the next round of updates for both Orchestrator
and Service Manager. Orchestrator 2019 supports
PowerShell version 4 and above and enables the use of 64-bit PowerShell commandlets
in your runbooks. Service Manager 2019
has improvements in the AD connector to connect to
a specific domain controller, as well as improvement in
the reliability of the console. With Service Manager 2019
and Orchestrator 2019, our customers can carry forward
the investment they have today knowing they have the support
of Microsoft behind them. If you have a hybrid
environment with Azure, we recommend taking a look at
automation capabilities in Azure, for example, Azure Automation. Azure Automation supports On-
Premises and Cloud environments and Microsoft is currently
looking at ways to integrate Azure Automation
with Service Manager. If you’re looking
for additional value beyond what is available in
the Systems Center Service Manager, then you can explore
partner solutions which augment native Service
Manager functionality. Some of which require
no additional cost. System Center has
a thriving ecosystem of partners, and I would like to
thank them for creating amazing solutions for
our System Center Suite of products. I really hope that the overview
of System Center 2019 was helpful for you and now you’re ready to explore System Center 2019. Here are some resources
that can help you get started with System Center 2019. Well, the next
session coming up is three ways to modernize
Windows Server Apps. Remember to take a knowledge check, but first let’s hear from Cosmos talking with the developers
behind Window Servers. [MUSIC]>>Hello, and welcome back to campus. We just heard from
the System Center team all about Virtual Machine Manager, and who better to talk with about those virtual machines than the Virtual PC guy himself,
Mr. Ben Armstrong. Ben, thanks for being here.>>Good to be here, Cosmos.>>Now, in the context of the Spectre and Meltdown
vulnerabilities, there’s more focus than ever on using virtualization as a way
of isolating workloads. So what’s new in Windows Server
2019 for isolating workloads?>>This is actually
a super important topic because these are huge issues for our
customers, and in Server 2019, we actually went in and made some changes to the underpinnings of Hyper-V to change the way virtual machines are scheduled, to help ensure that customers’
workloads are always protected.>>So you introduced
a new scheduler type, is that right?>>Yes, the Core Scheduler. So we now have what we call the Classic and the Core. Classic is what we’ve always had; with Core, what we actually do
is when you create a virtual machine with
multiple processors, we make sure these virtual
processors are all put together on the same
physical piece of hardware. So if you have
a multi-core processor, we put all those virtual processors
in the same place. So if someone were to
try and use one of these exploits to gain information, they would just be
looking at themselves.>>So it sort of further
constrains the placement of the virtual processors to the actual physical boundaries
of the hardware?>>Yeah, and the great thing
is this is all transparent to the user,
they don’t see a thing.>>So if someone wants
to use this with Windows Server 2016,
can they do that?>>Yes, we did actually also
move that change back to 2016; it’s not turned on by default, so it’s a bit more complicated to set up and there are some caveats, but in Server 2019 it all just works.>>That is fascinating stuff. Ben, I know you’re
a busy guy, so we’ll let you get back to it.
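As a quick reference for what Ben describes: the hypervisor scheduler type is a documented host-level setting changed with bcdedit from an elevated prompt. This is a sketch of the documented commands, not a full walkthrough; a reboot is required for the change to take effect.

```shell
# Switch the Hyper-V host to the Core scheduler (the default on Server 2019);
# use "classic" to switch back. Run from an elevated prompt, then reboot.
bcdedit /set hypervisorschedulertype core
```

The hypervisor logs the active scheduler type at boot in the Hyper-V-Hypervisor operational event log, which is the documented way to confirm the setting took effect.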
Thank you very much.>>Yeah, happy to be here.>>Now, before we wrap up it’s time
for one final knowledge check. [MUSIC]>>Hi, I’m Taylor Brown. I hope you’re enjoying
the Windows Server Summit today. For those of you who know me, you know we’re going to be
talking about containers. We’re going to talk about
how we can modernize our applications using containers. For those of you who haven’t had a chance to meet me yet, and haven’t had a chance to talk about how we can take advantage of this great new innovation of containers with Windows Server in our existing applications, you’re in for a treat. We’re going to go through
some ways to do that. We’re going to see a couple of demos. It’s going to be great. Let’s go ahead and just dig on
in and get started. It’s probably no big surprise to those of you out there, but ASP.NET and .NET continue to be among the top frameworks used by developers even today. This data came from Stack Overflow. Stack Overflow, we know, has a little bit of a bias towards those new application frameworks like Node and .NET, but of course, ASP.NET is still top and .NET is still one of the top frameworks being used for new development today. Now, that doesn’t even account for all the existing applications
that we have out there. So how do we take advantage of those applications
and modernize them, bring them into the new paradigm? So we’re going to talk
about a pattern today. We’re going to take
that existing application. We’re going to convert it into
a container that’s going to allow us to leverage
these modern methodologies, CI/CD, and active deployments,
and deploy every day, and all of those kind of things
on modern infrastructure. That infrastructure could
be Server 2019, on-prem, it could be in the Cloud, it could be in Azure, of course. We’re going to give our developers
the opportunity to leverage these modern micro-service patterns to take advantage of
those new paradigms. So let’s dig right in, got
a typical enterprise application. I’d ask for a show of hands
but we’re in a virtual event. So it’s hard for me to
see all your hands up, but how many people
have an application or a lot of applications that look just like this? We’ve got some sort of WinForms app or WPF front end, or maybe it’s a web app. Then, there’s some middle tier. It’s a set of Web Services or WCF or who knows what? Then, it’s talking to some back end, typically like a SQL Server or maybe a File Server as
the kind of back end. Super common. How do we
modernize this application? Well, we’re going to containerize it. We’re going to package it up. So on the front end, we can use the packaging technologies that we’ve got on Windows 10, MSIX, to make that easily redeployable, but we’re not going to talk
too much about that today. There’s some great resources
out there for you. Super easy to do. By the way, it’s using
containers in the back end. So it’s all the same stuff
under the covers. For that middle tier, we’re
going to talk about how we can use Windows Server
containers today. So we can package
that application up, make it easy to redeploy it, easy to move it around, easy to rebuild it, patch it, all of the great
new management capabilities. We’ll talk a little bit about how
we can do the same thing with our back end either leveraging
containers and packaged up SQL. We’ve got a great resource
at the end for a private preview of SQL Server or we can take advantage
of a PaaS service if we’re using the Cloud
like, of course Azure. So what does it mean to containerize? Well, typically, this has
meant to package up freight. When we talk about
containerizing applications, it’s really the same thing. We’re putting that application
into a uniform box. It’s going to have the same verbs, the ways to describe it, memory, and resources, and storage. Now, we can treat all of our applications the same
from a management standpoint. I can start up one application
by just saying, “Docker, run that application.” I can sort of another one
with the exact same thing. As long as I give it
the information it needs like where your storage is going to
be located in the resources, it will run exactly the same way. So it’s hugely advantageous for us as both Developers and Admins
working on applications. So here’s our traditional
architecture. We’ve got an application; it can be written in .NET,
could be Win32 or Java. We can have a Windows service. So again, that service can
be written in anything. It’s just a long running
set of processes. Then, of course, we’ve got the underlying kernel, the kernel that provides the driver support, and turns the file calls, the “open this file” calls, and the sound and network into bits and stuff that goes out on the wire. When we containerize an application, all we’re really doing is taking those top two buckets, the applications or the services, and we package them into that container, that uniform box. The kernel stays the same. Now, we can run multiple containers on the same kernel side-by-side. The applications are unaware of each other because they’re each in their own shipping container, their own box, their own steel box, side-by-side. From the kernel standpoint, we know how to talk about them. From an Admin standpoint, we know how to just describe that I want to run this application
and that application. So this is all a little bit abstract. Let’s just go ahead and see this in action to give us a little bit of a concrete example of how
this works in the real world. So here we are. I’ve
got a virtual machine. I just happened to deploy this in Azure, leveraging our Windows Server with Containers image; that’s an image that we build in the gallery. It’s just got Docker installed and the container image I just pulled in. If I want to start a container, all I do is say, “docker run.” So Docker is the tool
we use to start containers. I pass in a couple of flags: this --rm just says, “throw the container away when I’m done with it,” and -it says, “I want to interact with it,” which I’m going to do because we’re doing a demo here. Then, this next part is the image that I want to use. So I’m getting this image from our Microsoft Container Registry. I’m using the Windows image, and I’m using Server Core from our long-term servicing channel, 2019, and the process I want to run is cmd. So here we go, firing that up, and we’ve started a container. So we’re now inside this container. It’s got its own file system. It’s really a contained environment. If we look at Task Manager over here, we see that there’s a new
session that got started up and a new job object that
got associated with it. So if I do something like
ping -t www.Microsoft.com, we see that ping is now running. It’s running in session three with this job object. If I stop ping, it goes away. So the host can see all of
the processes that are running. If I exit this container, then all of the processes and things that were associated
with it just go away. If I start the container up again, we see a new one that starts up. From a performance standpoint, watch this. This is great. Our memory usage when we’re not using the container is going to be about 2.2 gig. So I just exited this guy. So as soon as that guy exits, we’ll see that drop down to about 2.2 gig. If I start a new container, 2.3. So in about 100 meg, we started a new container. It’s a lot less overhead than we normally see with
virtual machines. So we get some better density
out of this in addition to those deployment advantages
that we talked about. So this is all great and wonderful, but how do I actually manage
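For reference, the demo above boils down to a couple of commands; a sketch using the Server Core LTSC 2019 image tag named on screen:

```shell
# Start an interactive Server Core container; --rm discards it on exit,
# -it attaches an interactive terminal, and cmd is the process to run.
docker run --rm -it mcr.microsoft.com/windows/servercore:ltsc2019 cmd

# Inside the container (its own file system, session, and job object):
ping -t www.microsoft.com   # the process is visible from the host's Task Manager
exit                        # all of the container's processes go away with it
```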
this stuff in production? You just showed me remoting into
a Server and running some script, like I’m not going to do that for
600, 700, 1,000 applications. So how do we manage
them in production? Well, we do that through what
we call an orchestrator. So orchestrators take advantage
and take care of things like scheduling our containers for
us, affinity and anti-affinity. So we don’t want to have all of our web servers running on the same node, so that if that node goes down, all the websites go down. It takes care of those things for us. It can do our monitoring
and it can do our failover. Our scaling, it takes care
of network management. So what ports need to
be associated with what and how do I hook those
up to load balancers? It deals with service discovery. How does the Web Server
talk to the SQL Server? Upgrades, it can do
all the coordination of an upgrade rolling through the various versions. In terms of options there,
we’ve got a couple. We can use Service Fabric. Service Fabric is
our Microsoft orchestrator for microservices and applications. It runs a lot of our internal services, and customers have been taking advantage of it for quite a while as well. We’ve taught Service Fabric
how to do containers. Kubernetes is, far and away, the predominant container orchestrator. We’ve got full Windows Server support in Kubernetes as of 1.14. Super excited about that. A lot of work went into that, both from Microsoft
and across the ecosystem. Really excited to have full Kubernetes support for
Windows Server containers now. We’ll see a demo of that in
a minute as well as App Service. App Service, for many of you who
have never used Azure before, you might not be familiar with this. It’s so easy to just set up a web app and get it running in the Cloud. A couple of clicks and you’re done. It’s just a great experience. Then, of course, Docker
Enterprise Edition, who we’ve been partnering
with for a number of years, bringing great container
innovation both to Windows and Linux, of course. So we led off with this:
we’re going to talk about three ways to modernize our
applications. Well, here they are. One, we’re going to talk
about how we can modernize using our Private Cloud resources. We’re going to talk about
how we can use Azure and Kubernetes in Azure. We’re going to show App Service. So without further ado,
let’s just dig right into these and let’s start
out with Kubernetes. I’m just going to connect to
my Kubernetes control plane. I happen to be using Azure. This can run on-prem; it does not have to run on Azure, you can run it in any cloud you want. I’m just using Azure because it’s a
super easy way to get going. But this is what it looks like. So we’ve got Kubernetes
control plane. We’re running. We’ve
got one deployment. We’ve got five different pods. We’ve got a replica set. I’ll explain what some of
these mean in a minute. Don’t worry about the terms too
much. We can see this deployment. I’ve got a deployment that I called IIS 2019; it’s probably a web server. These are those five pods. These are five different containers. So all of these containers are running under this deployment. They are scaled out. So I’ve got five web servers running, replica sets, and then we’ve got a service. That service is just called IIS, and if we open it up, it’s just a web server. Now notice, there’s only one address for this because it’s all being automatically load balanced for me. So one of the cool things
I can do in Kubernetes if I want to scale this up, I can go in here and
I can say scale 10. I’m expecting a lot of traffic there, and it will go and
scale this up for me. Now if I tie this
into some monitoring, I can do what we call
automatic scaling. It’ll automatically scale
that up whenever I need to. Of course, if I wanted to I could
scale this back down as well. All the load balancing
is taken care of for me. I don’t have to go reconfigure
load balancers; it all just kind of happens. Super easy, amazing kind of experience. All of this is available on-prem
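The scale-out shown here maps onto standard kubectl commands; a sketch, assuming the deployment from the demo is named iis-2019 (Kubernetes resource names are lowercase, so the on-screen name is an assumption):

```shell
# Scale the web server deployment from 5 to 10 replicas.
kubectl scale deployment iis-2019 --replicas=10

# Or tie scaling to monitoring instead: CPU-based automatic scaling.
kubectl autoscale deployment iis-2019 --min=5 --max=10 --cpu-percent=80

# Scale back down when the extra traffic is gone.
kubectl scale deployment iis-2019 --replicas=5
```

The service's load balancer keeps pointing at whatever replicas exist, which is why no reconfiguration is needed as the count changes.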
and of course in Azure as well. So I’ve got a preview of our
Azure Kubernetes service, really really excited about this. We now have the ability
in this preview. There’ll be a link here
at the bottom and at the end for how you
can join the preview. We now have the ability to
set up Windows node pools. So we’ve got this new node pools in preview here: we’ve got Linux nodes, and we’ve got our Windows nodes. On those Windows nodes, I’ve got my containers. So I can do really cool things like jump into
the insights here, I can get all of the utilization of the whole cluster all
of the nodes within it, I can go to this nodes tab. This is great. We’ve
got our Linux nodes, so we can see the Linux penguin there. We’ve got our Windows Server nodes. I can pull that out, and I can see all of the containers that are
running on these nodes. So here they are. There’s the containers on it. I can click in, I can get
information about that container, I can go in look at what kind
of configuration it has, what labels, what versions
it’s running, all that information super
easy right at my fingertips. Because it’s Azure
Kubernetes service, all of our Visual Studio
code integration works too. So I can go and I can see
all that same information. If I had logs on this container, I can just right-click “Show logs”. We’d be dumping
all those logs right here. This one doesn’t happen to have any logs, but if it did we’d see them all right in that one spot. Super easy for our developers
or admins who are running PowerShell scripts or doing any sort
of configuration to see this. So it’s available either place. Really really really easy. So that’s our Azure Kubernetes
service in private preview. Go jump in on that it’s great. Now I’m going to show
you the easiest way to get containers up
and running in Azure, and that is App Service. So all we have to do
here is we just go here, “New web”, and we’re going to
say “Web app for containers.” So this is great if
we have a web app, anything that’s using just IIS or a web server. Just give it a name, which subscription I want to use, or what resource group. Of course, we’re at the Windows Server Summit here, so Windows obviously. But if I wanted to use Linux
I could do that as well. What plan, and then I get
to configure the container. So here I can choose
what image to use. For this little quick-start
they’ve just got a sample one. Again, it’s just mcr.microsoft.com, App Service samples, ASP.NET helloworld; pick that sucker up, hit “Apply,” and away it goes. I’ve gone ahead and configured one ahead of time here, just a hello world server. So I can just go select the URL here, and boom, the web server’s up and running. I can go in and configure
all sorts of stuff here. So I can go into the
container settings, I can choose what image to
use if I was going to do an upgrade or change what it was. Really easy to pull that in
if I wanted to change between Azure Container Registry
and Docker Hub, super-easy. Got all my logs streamed
right at the bottom here, so I can look at all those. Just a really, really easy way to get started with
containers in Azure. So with that, we’ve just seen
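The portal clicks above can also be scripted with the Azure CLI; a sketch under assumed names (myGroup, myPlan, and my-hello-app are placeholders, and the image reference stands in for the ASP.NET helloworld sample shown in the demo):

```shell
# Create a Web App for Containers from an existing container image.
az webapp create \
  --resource-group myGroup \
  --plan myPlan \
  --name my-hello-app \
  --deployment-container-image-name <sample-image-from-mcr>

# Stream the container logs, like the log view at the bottom of the portal blade.
az webapp log tail --resource-group myGroup --name my-hello-app
```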
a couple of great ways that we can leverage Azure Kubernetes, on-prem or in the Cloud to start
to modernize our applications. Now for many of you,
you’re just getting started with Windows Server 2016. I just want to show you, look at all the innovation
that we’ve been able to deliver specifically
around containers both in our semi-annual channel, and then all the way
up into Server 2019. So with Server 2016, it was our first innovation
on containers. We launched Windows containers
for the first time. Which was a lot of work and
tons of great things there. Immediately out of that, in our semi-annual channel, we started delivering new innovation. We optimized the image sizes, reducing them
by significant amounts. A whole bunch of
networking improvements, 1803, six months later again a ton
of new networking innovation, a bunch of additional
container enhancements, better app compat,
more optimizations. Server 2019, six months after that our long-term servicing release. Again, more optimizations
even more container work, all of the foundations
necessary for Kubernetes. This is the version that
Kubernetes 1.14 picks up. So I really, really encourage you to start looking at Server 2019
as soon as you possibly can. With that, we’re still at it; 1903, our next semi-annual channel release, a bunch more great
container innovations. More networking support. We’re now able to take advantage of GPU acceleration with DirectX. So some of our game studio companies and partners are really
excited about this ability to leverage that optimized
GPU acceleration for applications running
in their container. We added the container spec
to the PowerShell gallery. We’re working on the next layer
of open source innovation around containerd. So a ton of great stuff continuing in
that semi-annual channel. We’re not done, we’re already working on the next semi-annual
channel release. So for those of you out
there still looking at 2016, see if you can start to
turn your attention to 2019 or even better to
that semi-annual channel. A lot of great stuff there. I’ll leave you with this. We’ve got a bunch of great customers who are already using containers. We’d love to add you to this list. So go ahead and give containers a try today. It’s super easy: aka.ms/containers is a great walkthrough that gets you started, and the docs team has done a great job of updating and revising it. If you have feedback on those docs, please let us know. Go jump on that private preview of
aka.ms/aks win. By far the easiest way to get
Kubernetes is set up and running. Super-easy, Windows and
Linux together in Azure win. We’ve got SQL Server support for Windows containers
in private preview. Go ahead and jump on that,
aka.ms/windowscontainers/sqlpreview. Fill out the form, select Windows containers, and we’ll get you into that private preview as well. With that, this concludes
our event for today. You got to hear about all the latest and greatest on Windows Server 2019. For more information, check out the resources icon
just below the screen. That’s going to include links to
download all this information and dig in deeper on Windows
Server and our products. Also please do not forget
to take the survey. There’s a survey icon below. Let us know what you thought of
the event. We love these events. We want to continue to
have great virtual events, and we want to make sure
that they’re great for you. Shortly, we’ll be tallying up
the results of the knowledge tests, and we’ll be reaching out
to all of those winners. Congratulations to
you. Thank you, all. Stay tuned for future
Windows Server events, and enjoy the rest of your day. [MUSIC]
