Today I enjoyed a day at the vcnrw community event, organized by Timm and Helge. I am proud to say that Matrix42 is sponsoring this great community of roughly 100 people. The community focuses on virtualization, datacenter and data management.
In the following blog entry I will document the day and the sessions in room 1.
Session 1: Changing our world in front of your eyes
The first session was held by Douglas A. Brown. He started his session by talking about the future of IT and the two letters „IT“. He said IT is not about the technology, it is about the information. The key point of his session was „Big Data“. But wait: he starts talking about American pie? What the hell? Let me explain: Americans love pie. But which is the most popular pie in America? (Big) data was needed to find out that the favorite is not apple pie, it is cherry pie.
Next topic was „Context“: If you search for „jaguar“, are you looking for the animal or the car? This question can be answered if you relate it to other data. If you just searched for a jungle trip on Amazon, the machine can be relatively sure that you meant the animal. Questions like this are driving the rise of machine learning.
After explaining machine learning, he switched to the topic of visualization. Visualization is a very important point when you think about data analysis.
But there is also a dark side:
Think about Edward Snowden. Doug said that 80% of corporate crime will be „big data related“. The EU passed a law to collect and store telecommunication data; in Germany this is sometimes called „Stasi 2.0“. Doug used a quote from Spider-Man to sum it up in one sentence: „With great power comes great responsibility“.
After speaking about big data, he used the remaining time to talk about the „Internet of Things“. IoT in one sentence is „machine to machine communication“, or better: „connecting the unconnected with intelligence“.
He said that IoT is not yet the big thing for enterprises, but IDC estimates that 212 billion devices will be connected, and Cisco will manage the basic infrastructure (light, heating, …) for 300+ buildings in every country. The „personal devices“ will also be important in the future and are part of the IoT world. Think about the connected watches, glasses and shoes. Last but not least, the robots are coming; put some „eye-balls“ on them and they will be loved by humans. He explained that IoT will not follow the path of more and more pixels with every new device generation. He showed an umbrella, stuffed with a WLAN and a GPS adapter, that blinks red when it is going to rain outside. Just one pixel, but with exactly the information you need. So IoT will simplify things to create a better personal experience.
Doug’s last topic was: „Think mobility first“. If you develop a new interface, develop it for your mobile device first, not for your desktop. Build technology bridges that allow you to extend your legacy apps to mobile apps. One solution could be technologies like Remote Desktop or Citrix terminal services, where the clients can zoom a text box automatically.
All in all a great session showing the current direction of IT technology. Thanks for the journey from big data -> context -> IoT -> mobile.
Session 2: Peter Kirchner from Microsoft about Hybrid IT
Hybrid IT includes apps (Outlook / Outlook Online), data (OneDrive), identities and networks. The question for Peter is: why is hybrid IT needed? There are several reasons. Some services (databases, ADs) should not be in the cloud. To be competitive in big projects, you can use the cloud instead of buying hardware. You can use the cloud as a backup location, and so on.
Microsoft gives you several options to build hybrid solutions for your networks. You can do it with an IPsec VPN, with exchange providers (ExpressRoute) or (the biggest scenario) you can build your own fixed line (mostly MPLS) into the Microsoft datacenter.
With Azure Active Directory you can use your identities in the cloud to allow access to cloud applications, and with the AAD Application Proxy you can also allow access to internal applications from the outside.
Session 3: Bob Janssen, CTO at RES – Staying relevant through innovation
„If you fear change, leave IT.“ That was the sentence Bob started his session with. Innovation is a need-to-have. One example of innovation and change in IT is cloud computing. Innovation today is driven by „user expectations“, „handling complexity“ and the „speed of delivery“.
A modern digital workspace is shaped by the following factors:
- „Policies & roles“, which should be managed by an identity warehouse and consumed by a service store to deliver the services.
- „Admins & users“, where we have to think about workspace security.
- „Assets & services“, where you need to think about integration, provisioning and so on.
- Last but not least, you have to control the costs.
Session 4: Benny Tritsch -> RemoteApp
Benny talked about Azure RemoteApp cloud deployment. He explained that it is possible to create your own images with your own applications and publish them to Azure RemoteApp. In the background the machines are prepared with standard technologies like sysprep.
A very interesting use case is taking RemoteApp for demonstration and evaluation projects. It was very impressive to hear that projects with 40,000 users are currently running on RemoteApp.
Benny described very well which steps are needed to create a VM template for RemoteApp. He showed the steps in PowerPoint, but also live in his current environment.
Next he described how to use RemoteApp-based applications in an HTML5-based client. In the future this should also be possible with the store-based Remote Desktop app.
The feature set differs depending on the client you use.
After explaining Azure RemoteApp, he explained GPU-accelerated remoting. The idea is to plug a high-end graphics card into the server. There are several models for this. You can let the CPU emulate the GPU, which is normally not a good idea. The next step is to put the GPUs into the hypervisor and let a piece of software redirect the VMs’ GPU access to the physical GPU (RemoteFX). This is better than using the CPU, but the issue is the software layer between the VM and the host. Last but not least, you can dedicate a GPU to every machine, which is very cost-intensive.
A very interesting fact is that not only high-end, graphics-intensive applications use DirectX (and with it direct access to the GPUs). Office and browsers can use DirectX directly as well and benefit from this.
Microsoft is using dedicated graphics cards in the N-series of their Azure VMs. Thanks, Benny, for this interesting deep dive into the graphics-accelerated world ;-).
Session 5: Thomas Maurer – Nano Server
Thomas Maurer from itnetx explained the new Microsoft Nano Server to us. The issues in datacenters today are „too many reboots“, „too large server images / footprint“ and „infrastructure that requires too many resources“.
Part of the solution for these issues is to remove all unneeded roles from the server system. A Nano Server is completely headless, without a command prompt, RDP and so on. Nano Server follows the zero-footprint model: server roles and optional features live outside of Nano Server, and the packages are installed like applications. The key roles are Hyper-V, storage (SOFS) and clustering. Drivers are fully supported.
MSI is not supported on Nano Server. Microsoft will announce a new installer for Windows Server, which should be used in the future to install applications.
It was very impressive to see the difference between the server editions regarding patches, reboots and so on.
After some theoretical information, Thomas showed us the handling of Nano Server in a live demo. The packages for Nano Server are designed like packages in WinPE, as .cab files. With the PowerShell cmdlet „New-NanoServerImage“ you can create a Nano Server VHD. The cmdlet uses existing tools (like dism) to create the VHD file.
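The demo essentially boiled down to one cmdlet call. Here is a hedged sketch of what this looked like in the Technical Preview builds (parameter names changed between previews, so treat the exact switches as an assumption based on the TP-era NanoServerImageGenerator module, not a copy of the live demo):

```powershell
# Import the image generator module shipped on the Windows Server preview media.
Import-Module .\NanoServerImageGenerator.psm1

# Build a Nano Server VHD with the Hyper-V (compute) and clustering packages.
# -MediaPath points at the mounted Windows Server preview ISO.
New-NanoServerImage -MediaPath "D:\" `
                    -BasePath "C:\Nano\Base" `
                    -TargetPath "C:\Nano\Nano01.vhd" `
                    -ComputerName "Nano01" `
                    -Compute -Clustering -GuestDrivers
```

Under the hood this script drives dism to service the base image and inject the selected .cab packages, which is exactly what Thomas described.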
After creating the image (in the live demo it had a size of 450 MB) you can attach it to a Hyper-V machine. When you start the VM, Nano Server is installed (which takes a few seconds) and you can log in to the Nano Server recovery console. There you can configure the NIC, and that’s it.
How is a Nano Server managed?
Management is done via PowerShell, WMI and the typical MMC snap-ins for Windows Server management. In addition, there is an Azure-based web console; in Azure you need a remote management gateway for it. With this web console you have access to PowerShell, the file system, the registry and so on, directly from Azure.
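Since there is no local console beyond the recovery screen, everyday administration runs over PowerShell remoting. A minimal sketch (my own, assuming a workgroup Nano Server at the documentation address 192.0.2.10):

```powershell
# Manage a headless Nano Server via PowerShell remoting (WinRM).
# In a workgroup scenario the Nano host must be added to TrustedHosts first.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "192.0.2.10" -Force

# Open an interactive remote session with the local Administrator account.
$cred = Get-Credential -UserName "Administrator" -Message "Nano Server logon"
Enter-PSSession -ComputerName "192.0.2.10" -Credential $cred
```

Once the session is open, everything you would normally do at a console (networking, roles, diagnostics) happens through cmdlets.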
Last but not least, he talked about Containers:
With containers you run several isolated operating system instances for several applications. You could say it is virtualization on the operating system layer.
In the Microsoft world, Hyper-V containers and normal Windows Server containers are available. Thomas showed us live, via PowerShell remoting into his previously deployed Nano Server, how to deploy a container to the Nano Server. The interesting fact is that the container host can access the container’s processes (but not vice versa).
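In the Technical Preview builds this was a plain PowerShell workflow. A hedged sketch follows; the cmdlet names come from the preview-era Containers module (later superseded by Docker tooling), so treat them as an assumption rather than the exact demo:

```powershell
# Sketch of the TP-era container workflow on a container host such as Nano Server.
# Create a Windows Server container from a base OS image.
$container = New-Container -Name "Demo" -ContainerImageName "WindowsServerCore"
Start-Container -Container $container

# From the host you can reach into the container and see its processes...
Invoke-Command -ContainerId $container.ContainerId -ScriptBlock { Get-Process }

# ...but from inside the container, only its own processes are visible.
```

This asymmetry is exactly the point Thomas demonstrated: the host sees into the container, the container cannot see out.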
Then he converted the container to a Hyper-V container and showed us that the container’s processes are no longer visible on the container host. This is a unique feature of Hyper-V.
So we saw that Nano Server and containers work perfectly together.
Session 6 – Kevin Goodman: Take your PowerShell programming to the next level
Kevin showed us how to start developing with PowerShell: not with Notepad, not with PowerShell ISE, but with the PowerShell extensions for Visual Studio or with Visual Studio Code. All the issues you currently have with PowerShell ISE are solved with them.
What is possible with PowerShell? Of course: everything. Calling the Win32 API, COM objects, .NET instantiation or any other API is absolutely no problem, so you can take full advantage of your Windows subsystem. In a small live demo he showed us how to call a MessageBox via the Win32 API (have a look at pinvoke.net) and via .NET.
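A small sketch of the Win32 path (my own code, not Kevin’s; the signature is the standard one documented on pinvoke.net):

```powershell
# Call the native user32 MessageBox from PowerShell via P/Invoke.
$signature = @'
[DllImport("user32.dll", CharSet = CharSet.Unicode)]
public static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);
'@
$native = Add-Type -MemberDefinition $signature -Name "NativeMethods" `
                   -Namespace "Win32" -PassThru

# Show the dialog (hWnd 0 = no owner window, type 0 = OK button only).
$native::MessageBox([IntPtr]::Zero, "Hello from the Win32 API", "Demo", 0) | Out-Null

# The same dialog via .NET / Windows Forms:
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.MessageBox]::Show("Hello from .NET") | Out-Null
```

Two lines of `Add-Type` are enough to cross from PowerShell into native Windows code, which was Kevin’s core message.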
After calling some message boxes he showed us how to create a more complex WPF application. He created the WPF application in Visual Studio, read the XAML code with an XML reader in PowerShell and added functionality to the WPF form directly from PowerShell.
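The pattern behind this is the standard XamlReader approach. A minimal self-contained sketch (my own trivial XAML, not his demo form):

```powershell
# Load a WPF window from XAML and wire up an event handler from PowerShell.
Add-Type -AssemblyName PresentationFramework

[xml]$xaml = @'
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="PowerShell WPF Demo" Width="300" Height="120">
  <Button x:Name="CloseButton" Content="Close me" Margin="20"/>
</Window>
'@

# Read the XAML with an XML reader, as described in the session.
$reader = New-Object System.Xml.XmlNodeReader $xaml
$window = [System.Windows.Markup.XamlReader]::Load($reader)

# Attach functionality to the form directly from PowerShell.
$window.FindName("CloseButton").Add_Click({ $window.Close() })
$window.ShowDialog() | Out-Null
```

Designing the form in Visual Studio and only loading the XAML in PowerShell keeps the layout work in a proper designer while the logic stays in script.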
Then he showed a few examples of how to create a progress bar (with WinForms) or an indeterminate progress indicator (loading circle), also with WinForms.
The next step is doing tasks asynchronously with PowerShell jobs. Jobs can be used to run long-running processes in the background. How does this all come together? He demonstrated how to create a VHD file asynchronously from PowerShell, combined with a Windows dialog and a working cancel button.
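The underlying pattern is simple. A minimal sketch (my own, with `Start-Sleep` standing in for the long-running `New-VHD` call from the demo):

```powershell
# Run a long operation in a background job so the foreground stays responsive.
$job = Start-Job -ScriptBlock {
    Start-Sleep -Seconds 2      # in the demo: New-VHD creating the file
    "VHD created"
}

while ($job.State -eq 'Running') {
    # In the demo, a Windows dialog with a Cancel button runs here;
    # clicking Cancel would call Stop-Job -Job $job.
    Start-Sleep -Milliseconds 200
}

Receive-Job -Job $job    # -> "VHD created"
Remove-Job -Job $job
```

The foreground loop is free to pump a UI while the job does the heavy lifting, which is exactly what made the cancel button in his demo responsive.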
Session 7: Matrix42 MyWorkspace presented by Dirk Eisenberg
Dirk opened the session with a short presentation about Matrix42. He showed us that the number of SaaS applications is growing more and more. And what is the problem with SaaS applications? Onboarding, offboarding, user/assignment management and „access from everywhere“ are things that must be managed.
In addition to these management tasks, you have to know several URLs and several passwords. And not only SaaS applications: multiple directories and several internal applications all have their own passwords as well.
Matrix42 MyWorkspace is a bridge to your SaaS applications. You connect your AD with a cloud connector to MyWorkspace and your SaaS applications on the other side. MyWorkspace does the rest of the work for you: provisioning a user account in your SaaS application, deprovisioning it again, assigning a user to an application and so on.
Finally, Dirk showed us how to log on to SAP Business One with your AD account, integrated via MyWorkspace.
Session 8: Nils Kaczenski told us about Hyper-V // Myths and Truths
Nils played Mythbuster for VCNRW to explain Hyper-V.
Myth 1: Hyper-V is not usable for production. Hey, Hyper-V is what runs Azure = 1 million devices.
Myth 2: Hyper-V is not bare metal. No, that’s a myth. The hypervisor is a real bare-metal system; it communicates directly with the hardware.
Myth 3: Hyper-V VMs are not completely isolated. Is any hypervisor fully isolated? The driver layer integrated in any hypervisor could be used to execute code in any VM.
Myth 4: Hyper-V does not have the performance of other hypervisors. All three big vendors (vSphere, Hyper-V, KVM) are working on a very high level. The last real benchmark was done in 2009; the reason why no new benchmark has been made is the very high effort such benchmarks require.
Myth 5: Hyper-V is easy. No, it isn’t. Hyper-V is really complex, especially the network configuration. If you want to run a Hyper-V cluster with a shared disk, NIC teaming and bandwidth control, you need at least six different consoles.
Myth 6: Hyper-V needs SCVMM. With plain Hyper-V you can operate VMs, back up VMs with VSS, export and import VMs, do live migration and storage live migration, and configure high availability, replication and network virtualization. For none of these actions do you need SCVMM. As an alternative you can use the 5nine Manager.
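A hedged sketch (my own, with illustrative VM and host names) of how far you get with the in-box Hyper-V PowerShell module alone:

```powershell
# The operations Nils listed are all in the built-in Hyper-V module, no SCVMM needed.
New-VM -Name "Demo01" -MemoryStartupBytes 1GB -Generation 2          # create a VM
Start-VM -Name "Demo01"

Export-VM -Name "Demo01" -Path "D:\Export"                           # export (import via Import-VM)

Move-VMStorage -VMName "Demo01" -DestinationStoragePath "E:\VMs"     # storage live migration
Move-VM -Name "Demo01" -DestinationHost "Host02"                     # live migration

Enable-VMReplication -VMName "Demo01" -ReplicaServerName "Host02" `
                     -ReplicaServerPort 80 -AuthenticationType Kerberos  # replication
```

SCVMM (or the 5nine Manager) adds convenience and scale, but none of these core operations depend on it.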
Myth 7: Hyper-V needs Active Directory. No, you only need Active Directory for „shared-nothing live migration“.
Myth 8: NIC teaming gives you more bandwidth. No, teaming is just for failover, not for more performance.
Myth 9: Live migration needs maximum performance. In Windows Server 2008 R2 you needed a cluster and a SAN, and only one migration at a time was possible. With Windows Server 2012 you could migrate more systems simultaneously and no longer needed SAN access. With Server 2012 R2 Microsoft added compression and RDMA (SMB Direct).
Myth 10: Hyper-V should be used as „Core“.
Myth 11: SMB 3 is faster than a SAN. Possibly; it would be interesting to measure. But is it really easier to operate a complex NAS system than a SAN?
Myth 12: Dynamic VHDX is better than static. It depends on what you compare. Dynamic disks are optimized for several small scenarios: copying a file that contains just zeros results in a pointer and is done in less than a second.
Session 9: Jeroen van de Kamp: Your ultimate Windows 10/VDI Tuning Guide
Jeroen tried to show us how to tune Windows 10 in 45 minutes (the length of the session). Woohooo, cool.
He explained what the tool LoginVSI does to benchmark a VDI system. With this tool they discovered that with Office 2013 you can fit 20% fewer users on a terminal server than with Office 2010.
After this test they ran another test comparing Windows 7 to Windows 10 without any tuning. They saw that Windows 7 and Windows 10 have almost the same processor usage, with a little more hard disk usage on Windows 10. The command execution was nearly the same, but the overall response time was a little lower on Windows 10.
Then they did the same with tuning:
The CPU usage was a little higher on Windows 7, and the difference in the commands and the hard disk writes was dramatic. But taken together, there was no big change in overall performance between the two.
Huh, what??? What now?
After that they tested the VMware OS Optimization Tool. With it, performance went up by 40%. Wow!!! But it also optimizes Windows by disabling full-text search, and do you really want to live without full-text search?
So they started to test performance category by category (the categories in the VMware OS Optimization Tool).
They found out that the biggest impact on performance comes from a handful of services, Superfetch being the most notable one.
What is Superfetch? Superfetch compresses your memory. If the hard disk is a critical resource in your environment, please leave it on, because the compression reduces file system access.
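For completeness, a hypothetical sketch (not from the talk) of how you would inspect and, if disk I/O is genuinely not your bottleneck, disable the service; on Windows 10 Superfetch runs as „SysMain“:

```powershell
# Check the Superfetch (SysMain) service state first.
Get-Service -Name SysMain | Select-Object Status, StartType

# Only disable it when the hard disk is NOT your critical resource;
# the talk's advice is to leave it on, because the memory compression
# reduces file system access.
Stop-Service -Name SysMain
Set-Service -Name SysMain -StartupType Disabled
```

In a VDI image you would bake this decision into the template rather than toggling it per machine.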
What they also discovered:
- OneDrive uses 2-4% of your performance, even when OneDrive isn’t used.
- It is also important to remove ActiveSetup from your logon process.
- Remove the built-in apps, because they use your internet connection.
After that they tested the behavior with overcommitted memory. It brings some performance to the virtual machines, but there is a problem with overcommitment: if one user uses it very heavily, you get a „traffic jam“ on your hypervisor. Please be careful with this.
Last but not least they compared Office 2013 with Office 2016. At first there was no difference between the two. But in remoting it costs 10% more performance with RDP, and much more without RDP (let’s say with PCoIP).
Conclusion for Windows 10: big impact on memory, big impact on storage, and the CPU is the main bottleneck.