If you had asked me a couple of years ago whether I thought I’d need to build out a home lab environment for Microsoft 365 in 2021 – and, this time, make it more complicated – I would have laughed. Since about 2012, the lab environment I’ve needed has progressively shrunk, and I’d been using Azure to stand up and then deallocate simple test environments.

Several things have changed over the last few years, though, that have made me rethink the value of having a lab built out – and they might change your mind too. But I’m getting a little ahead of myself – we’ll come to that in a moment.

Now, it might surprise you, if you’ve read my articles and watched my videos, that I don’t spend every moment of my free time thinking about and working with Microsoft 365; the less I spend on computer hardware for lab purposes, the more I can spend on other things, and I’d assume that if the “under $500” in the title caught your eye, you feel the same way. So, I wanted to build out a lab that met all of my criteria – but I didn’t want to spend a lot of money.

The big question – why?

But let us get back to the fundamental question – why? When I posted on Twitter a few weeks ago asking for opinions about building a home lab, I was expecting some of the answers I received; “why aren’t you building it in Azure or AWS” being the main one. Like other MVPs and folks working at Microsoft Partners, I get a set amount of Azure credits to use for these kinds of purposes, and of course I can add a credit card and spend more if I need to. When I’ve built out lab environments in Azure, firstly, they are pretty straightforward, and secondly, I shut down the environments when I’m finished and start them up again when needed, meaning I don’t usually go over the included Azure credits.

What I wanted, though, reflects what I am seeing in the field and the skills I need to improve – and maintain. The core purpose of building a home lab isn’t to benefit your day job – it is to help you improve your own skills.

HAFNIUM reveals on-premises skills remain crucial

If HAFNIUM taught me anything, it is that some skills, like Exchange patching and management, are in danger of becoming stunningly rare due to the success of Exchange Online. This is extremely ironic, as every organization that runs Azure AD Connect must also run at least one Exchange Server to remain supported. Keeping a working, maintained Hybrid environment is useful partly to keep those skills fresh, but also so that if problems are encountered, I can share them with you. It would be a sad day if Exchange MVPs (who, admittedly, are all Office Apps and Services MVPs) joined the ranks of former Exchange admins and were out of practice patching. Now, that can of course be done in Azure – but what about larger on-premises environments? Even a small DAG is difficult to replicate in Azure using free credits, and once Azure starts costing money, it becomes an expense for you if you are self-funding the machines. In reality, though, complex, messy Exchange environments are more commonly encountered – whether upgrading to a newer version of Exchange or migrating to Exchange Online. These difficult environments are why some customers haven’t been able to move yet – and a multi-forest, multi-site (well, fake multi-site) environment provides value and a dose of reality when planning and executing upgrades and migrations.
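
As a small example of the muscle memory this keeps fresh, a quick build-level check across the lab org before and after patching might look like the sketch below, run in the Exchange Management Shell. Note that AdminDisplayVersion reflects the Cumulative Update rather than every Security Update – exactly the kind of gotcha worth rehearsing; Microsoft’s HealthChecker script gives a fuller picture.

    # Quick check of Exchange build levels across the org before/after patching.
    # AdminDisplayVersion shows the Cumulative Update build; compare it against the
    # build numbers Microsoft publishes for the latest CU/SU.
    Get-ExchangeServer | Sort-Object Name |
        Format-Table Name, ServerRole, AdminDisplayVersion -AutoSize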

There are still complex environments out there that need to migrate to Microsoft 365 – and technologies you might not know

Exchange isn’t the only reason, though, to build out a lab environment that mimics aspects of real-life messy environments. Microsoft Teams voice continues to grow, and if you don’t have some skills in areas such as SIP and Direct Routing, then although it isn’t a disaster, you’ll potentially struggle with Microsoft 365 exams – and find a voice migration difficult to understand conceptually, let alone actually perform. Much like Exchange, on-premises environments for Skype for Business have mostly migrated, leaving complex environments behind where being able to lab it out quickly is a useful skill.
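
To give a feel for what “labbing it out” looks like for Direct Routing, here is a minimal sketch using the Microsoft Teams PowerShell module. The SBC FQDN, number pattern and policy names are placeholders, and parameter names vary slightly between module versions.

    # Minimal Direct Routing sketch (Microsoft Teams PowerShell module).
    # The SBC FQDN, number pattern and policy names below are illustrative only.
    Connect-MicrosoftTeams

    # Pair the Session Border Controller with Teams
    New-CsOnlinePSTNGateway -Fqdn "sbc.contoso.com" -SipSignalingPort 5067 -Enabled $true

    # Build a simple routing path: usage -> route -> policy -> user
    Set-CsOnlinePstnUsage -Identity Global -Usage @{Add = "UK-National"}
    New-CsOnlineVoiceRoute -Identity "UK-National" -NumberPattern "^\+44(\d{9,10})$" `
        -OnlinePstnGatewayList "sbc.contoso.com" -OnlinePstnUsages "UK-National"
    New-CsOnlineVoiceRoutingPolicy -Identity "UK-DirectRouting" -OnlinePstnUsages "UK-National"
    Grant-CsOnlineVoiceRoutingPolicy -Identity "user@contoso.com" -PolicyName "UK-DirectRouting"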

And file server migrations to OneDrive and SharePoint continue to increase in number. On-the-job, mid-migration learning is far more difficult than building and testing it for yourself. Azure isn’t terrible for this, but a home lab with a reasonably fast upload has one big advantage – access to your own files, often with messy data structures and the potential to surface real-life errors – something hard to replicate in Azure.
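
For instance, a rough sketch of driving a file-share migration with the SharePoint Migration Tool (SPMT) PowerShell module is shown below – the share path, site URL and credentials are placeholders.

    # Rough sketch: queueing a file share into a SharePoint document library with the
    # SharePoint Migration Tool (SPMT) PowerShell module. Paths and URLs are placeholders.
    Import-Module Microsoft.SharePoint.MigrationTool.PowerShell

    $spoCred = Get-Credential   # account with rights on the target site
    Register-SPMTMigration -SPOCredential $spoCred -Force

    Add-SPMTTask -FileShareSource "\\fileserver\departments\finance" `
        -TargetSiteUrl "https://contoso.sharepoint.com/sites/finance" `
        -TargetList "Documents"

    Start-SPMTMigration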

I mentioned that many of the remaining on-premises environments in organizations today are reasonably complex. One major complexity is that many of these will need to be Hybrid for the long term. Data will need to live in Exchange, file shares and Microsoft 365 depending upon the type of data – and, in a similar fashion, mailboxes must often be kept on-premises in some cases, too. Maintaining a Hybrid environment that runs 24/7 and includes a little bit of mess – no more than is usually accumulated by real customers – is valuable. It means that you can plan and test the end-to-end scenarios that underpin the reasons for Hybrid, or features that can be used with Hybrid, such as Hybrid Modern Auth, with relatively little risk.
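
As an example, the on-premises side of switching a lab over to Hybrid Modern Auth can be rehearsed with a couple of commands like the sketch below – run in the Exchange Management Shell, and assuming the Hybrid Configuration Wizard has already created the EvoSTS auth server object and the Azure AD service principal work is in place.

    # Sketch: enabling Hybrid Modern Auth on the on-premises side of a lab Hybrid org.
    # Assumes the HCW has created the "EvoSts" auth server and Azure AD SPNs are registered.
    Get-AuthServer | Where-Object { $_.Name -like "EvoSts*" } |
        Set-AuthServer -IsDefaultAuthorizationEndpoint $true

    Set-OrganizationConfig -OAuth2ClientProfileEnabled $true

    # Verify the change
    Get-OrganizationConfig | Format-List OAuth2ClientProfileEnabled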

Another common on-premises requirement – not for many customers, but a growing number – is the AIP scanner. Classification of data before it lands in the cloud is more common with organizations that have been unable to migrate. Being able to set up and deploy the AIP scanner, and see the results of its usage, is vital before deploying it into a production environment.
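
A hedged sketch of standing the scanner up in a lab is below. It uses the AzureInformationProtection module from the unified labeling client; the SQL instance, cluster name and app registration values are placeholders, and parameter names have shifted between client versions, so check the version you install.

    # Sketch: installing the AIP scanner and running a discovery pass in the lab.
    # "SQL01", the cluster name and the app registration values are hypothetical.
    Install-AIPScanner -SqlServerInstance "SQL01" -Cluster "LabScanner"

    # Authenticate the scanner service account against an Azure AD app registration
    Set-AIPAuthentication -AppId "<app-id>" -AppSecret "<app-secret>" `
        -TenantId "<tenant-id>" -DelegatedUser "scanner@contoso.com"

    # Run a scan and check what would be classified before enforcing anything
    Start-AIPScan
    Get-AIPScannerStatus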

The list of cross-premises, Hybrid scenarios is a long one. But focusing on the fact that a home lab is for your own skills improvement, there’s another important aspect. Often, transformation is not just focused on Microsoft 365. There are on-premises migrations to Azure – re-factoring and rebuilding applications in a modern way – and there are a lot of legacy VMs supporting line-of-business applications that need to move too. That means that, alongside many Microsoft 365 projects, there is often also an Azure project, and it therefore becomes vital to have a platform to learn about the Azure aspects of on-premises migrations, if only to better understand the dependencies on other systems.

For my lab, I’m building out (or, by the time you read this, will have built out) not only a larger number of clients – both to support the policies I deploy and to improve my Teams demos – but also servers. A cross-forest Exchange environment is one of the key things most people working as Microsoft 365 consultants must understand, so having an environment where those “this is not supposed to happen” moments can either be avoided or have their solutions tested is invaluable. Alongside this, I’m also building out VMs to support learning about technologies associated with Azure, but in an environment that works for me.

What are the options for building a lab server?

What you buy for a lab server depends massively on the environment that will house it. If you live in a studio flat, then a rack full of kit won’t be useful, but a NUC-style device crammed with RAM will. Most of us have three core options to consider:

  • The NUC build – quiet, out of the way and low power.
  • The self-build workstation running Hyper-V – off-the-shelf, modern, up-to-date components – takes a little more space but is guaranteed to perform well.
  • Buying second-hand servers – the noisier option, but not as bad as you might think.

My first option was to look towards a NUC build. I keep core networking equipment under the stairs at home, near to where Internet connectivity arrives in the building. A quiet or silent build running 64GB of RAM might not quite be enough, but you can always add another, right? Right? Costed out, though, a NUC build using a current or previous generation laptop CPU of i5/i7 standard (or the AMD equivalent) with 64GB RAM and a 1TB SSD came to over $1800. With a second one already on the horizon – to get to at least 128GB RAM, my approximate minimum, and to spread the workloads across multiple CPUs – the total expenditure would soon be at $3600. For that kind of cash, I could run complex workloads in Azure (for a while, anyway) and, as I said above, there are other things I could do with the money.

Back in the autumn of last year, I started down the workstation route – the self-built whitebox PC. Whilst you get more bang for your buck compared to a NUC, the problem of noise remains a constant worry – and the size rules out installation anywhere in the home that isn’t annoying to someone, unless it’s in the garage. But who wants to put a decent, brand new PC build in their garage? And if you’ve bought a new desktop PC recently, you’ll have seen how fast these machines are compared to laptops. Side by side, my AMD Ryzen 5 3600 build with an Nvidia GTX 1660 Super, 64GB RAM and a 2TB M.2 NVMe SSD massively outperformed my brand new work laptop, a Surface Pro 7 with an i7 CPU inside. The desktop might have a lot more RAM, but the raw performance comparison suggested that machine was perhaps better suited to day-to-day needs. Sadly, though, a desktop PC is a more difficult option right now due to shortages of components like graphics cards. The $220 GTX 1660 Super I picked up nearly a year ago has doubled in price – if it is even available – much like other graphics cards, thanks to cryptocurrency mining and component shortages, ruling out another build. And even if prices had remained the same, such a build adds up to over $1000.

Finally, we come to the wildcard option – buying an actual server – and this is how I’ve done it for around $500, and certainly for much less than $1000. Most commentary on the internet is focused on using a home lab server to learn about virtualization technologies, but the scenarios I’ve listed above are just as relevant. And, because the same customers many readers are moving to the cloud are disposing of kit – usually via legitimate channels that then resell the hardware – there is a deluge of available kit at competitive prices.

The conclusion – buy a second-hand server. It seems wrong (especially when, once upon a time, I was buying the exact same models new for around 20-30 times the price they sell for today), but a second-hand server, even in the sub-$500 range, provides more CPU cores and much more RAM than anything new that can be built or bought for less than $2000.

For example, for the new lab I’ve just bought, I purchased from eBay a multi-CPU Dell R710 server with no storage, but 128GB of installed RAM and a Dell H200 SAS controller, for around $400 including shipping. In the video that you’ll see on our YouTube channel later this week, the most surprising aspect is that the power consumption isn’t noticeably different from running a PC with a lower-end graphics card. Components like redundant power supplies are nice, but in a lab server the redundant one doesn’t need to be plugged in. Adding a decent SSD, like the Samsung 870 EVO 1TB, also keeps the power draw low and, thanks to deduplication with Hyper-V, allows for an almost bottomless pit of lab server storage, as well as speed. Of course, the main challenge is where to put a server in your home. Now that’s a different story – so check out the video for tips, including one you probably don’t want to try at home.
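
As a sketch of what that deduplication setup looks like, the commands below use Windows Server’s Data Deduplication feature – officially targeted at VDI-style Hyper-V workloads, but perfectly serviceable for a lab; D: is a placeholder for the volume holding the VHDX files.

    # Enable Windows Server Data Deduplication on the volume holding the lab VHDX files.
    # The "HyperV" usage type is tuned for open VHD/VHDX files; D: is a placeholder.
    Install-WindowsFeature -Name FS-Data-Deduplication

    Enable-DedupVolume -Volume "D:" -UsageType HyperV

    # Run an optimization job and see how much space the lab VMs actually consume
    Start-DedupJob -Volume "D:" -Type Optimization
    Get-DedupStatus -Volume "D:" | Format-List SavedSpace, OptimizedFilesCount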

About the Author

Steve Goodman

Chief Editor for Audio and Video Content and Technology Writer for Practical 365, focused on Microsoft 365. A nine-time Microsoft MVP, Steve is the author of several Exchange Server books and a regular conference speaker, including at Microsoft conferences such as Ignite, TechEd and Future Decoded. Steve has worked with Microsoft technology for over 20 years and has been writing about Exchange and the earliest iterations of Office 365 since their inception. Steve helps customers plan their digital transformation journey and gets hands-on with Microsoft Teams, Exchange and Identity projects.

Comments

  1. Rob

    For me recently it was a complex ADFS 2019 deployment, and testing that with Office 365 and on-prem Active Directory. And yes, our Exchange is still a complex mess too, still in the process of being cleaned up after very large mergers and demergers. Good to get the labs to match non-prod and prod as closely as possible.

    1. Steve Goodman

      One thing I’ve seen folks struggle with is getting a “test” environment that matches the production one closely, as it’s difficult to get one that has the same intricacies or old, hidden mistakes.

      Certainly when I worked on the customer side though, I had less need for a “home lab” – I was quite lucky in that I could use older servers to build this out.

      For me today, though, many environments have complex needs much like you describe – so something that can run over time, have multiple versions integrated, and allow me to slowly make changes to give the environment a bit of a history is quite useful.

      But even for simple issues, like the one I posted about last week with Exchange 2016, standing up an environment takes time. As I’d had the Exchange 2010 hybrid environment shut down (it had been on my desktop lab machine), it took hours to get it to a state where it was up to date; and that was before building out Exchange 2016. Running that in Azure would have been OK, but next time I’d rather have it ready and running. Having it idling along and patched when needed would have saved me hours and hours of time just to check that the issue was reproducible and to get a screenshot to blog about!

  2. Nigel

    Had to smile when I read this, because when Azure Stack came out a few years back I predicted that MS would release a version of Office 365 for it, allowing enterprises to revert to on-prem. It hasn’t happened yet, but give it time…

    1. Steve Goodman

      I’ll be interested to see whether a local meeting server ever comes out for Microsoft Teams, to keep traffic local – but for other workloads like Exchange, I don’t think on premises is going away any time soon.

      Certainly my experience is that those environments that are staying (or only part moving) are quite complex – they are indeed the type that would deploy on-premises entirely if the entire stack could be – but I’m doubtful it will happen. Take ATA for example and then Defender for Identity. Maintaining a large on premises deployment to do the same as a cloud tool was complex. But, as you mention, with Azure Stack HCI and similar there is the opportunity for containerised components of Microsoft 365 to one day be deployed anywhere a customer chooses, perhaps with the cloud services managing integration.

      For me the lab is about those complex scenarios – but also learning about moving complex highly integrated legacy technologies.
