What Is the Meaning of “Lift and Shift”?

Picture this:

You are working with a team of people tasked with migrating the supply chain servers and related applications, databases, websites, and more from the datacenter in Atlanta, Georgia, to a Microsoft Azure tenant. There are ten servers in total, and all were built in 2020. They each:

Run Windows Server 2019 Datacenter
Are Dell PowerEdge R820 16-bay servers containing four 2.20 GHz E5-4607 six-core processors and a total of 64 GB of memory
Have a local RAID-5 configuration of two sets of three 500 GB SSDs each, for a total of two logical drives per server, each with roughly 1 TB of usable space

It is determined that we will use the Azure Migrate Tool to move each server to a suitable Azure VM in the new tenant named ‘Supply_Chain_202202.’

The latest information pertaining to this migration is that it will be a ‘lift and shift’ for all ten servers.

So, the grand question is:
What is the meaning of ‘lift and shift’?

Let’s assume for the sake of clarification that you have a portable fireproof safety box (like a SentrySafe 1200 Fireproof Box) which contains twenty 100-USD bills and ten 500-EUR bills. You have to move all the bills (USD and EUR) to a storage facility owned by a national conglomerate.

In the example above, taking the locked SentrySafe 1200 — bills and all — to the storage facility and placing the whole box inside is ‘lifting and shifting’ the bills. The other main methodology is ‘re-homing,’ which is taking the bills out of the SentrySafe 1200 and placing them by themselves directly into the storage space.

Essentially, ‘lift and shift’ is moving the server ‘in one whole piece’ to the new location (in this case, Microsoft Azure). In this instance, it can be done using the Azure Migrate Tool process.

To clarify, ‘lift and shift’ moves the complete entity as one piece, while ‘re-homing’ refers to creating a new entity in the new location and just shifting the data and configurations.

What Information Should an Application Portfolio Contain?

Imagine you are part of a team.

The task is to migrate the cold datacenter to a Microsoft Azure subscription. You have some information already. It has been determined that 10 total servers will be moved. The servers are all currently shut down (hence, the term ‘cold datacenter’).

The team’s current task is to create an application portfolio. Currently, some team members have a single question:

“What is an application portfolio, and what should it have in it?”

An application portfolio is a dataset of information about all the applications ‘in scope’ (in play, so to speak). Essentially, in this case, the applications in scope are the apps that will migrate to Azure.

Now, the bigger question:
What things should an application portfolio contain?

An application portfolio can contain anything relevant for each in-scope application slated for the target migration. However, in my experience, the following information should (at a minimum) be gathered for each application:

  1. The principal professionals involved with both the migration of applications and this application in particular
    They may include a Business Analyst, Project Manager, IT Consultant, Infrastructure Manager, Database Manager, Network Administrator, Migration Lead, Migration Secondary On-Call, Cybersecurity Engineer, Virtualization Engineer, and more.
  2. A list of the known POTENTIAL risks that can present themselves
    These application portfolios will mature and evolve over time, and one of the goals is to resolve all known risks and assumptions.
  3. Specific information about the application
    ⦁ Is there a support contract for the application? When does the contract expire, and by what date must it be renewed (ideally at a discount to the purchase price)?
    ⦁ How many users are licensed for the application, and what are the license key IDs?
    ⦁ Who is the current SME (Subject Matter Expert) for this application?
    ⦁ Is the application fully contained in a company-owned datacenter, or is it partly or fully cloud-managed or from the vendor in a SaaS design? SaaS means Software as a Service — basically, you rent the access from the vendor’s datacenter via web access.
    ⦁ What is the current version of the software available from the vendor and the current version in scope for the migration?
    ⦁ Is there a list of the names (usually NetBIOS names or fully qualified domain names) and IP addresses of all the servers related to the application, any related database/data warehouse server names and IP addresses, and any access accounts used by processes in conjunction with the application?
    ⦁ Do you have a list of any and all ports used by the application, along with any cloud-related URLs and ports?
    ⦁ Do you know the application vendor’s name, website, and addresses, both physical and email?
  4. The accounts needed to run the application, such as:
    All membership groups for users that utilize the application, any shares needed to be set up for the application to work (and what they are currently called and where to locate them), and other applications needed to help this application operate properly.
  5. Links to documentation
    These links should support a better understanding of how the application is installed, configured, maintained, and constructed.
  6. The agreed-upon plan of the steps that will take place
    This is done to properly migrate the application and related assets.
  7. The timetable of when the migration will start and complete
    A Gantt chart is a plus here.
  8. An idea of how the migration will be tested for post-migration success
    Devise a plan on what specifics need to be tested before the application goes live and how you’ll go about performing the test.
  9. Contingencies that must be resolved
    Specifically, focus on the circumstances you must sort out before the migration can begin, such as upgrading the application to the current state, upgrading the datacenter infrastructure in preparation for migration activities, etc.

Answering as many of the questions above as you can helps solidify an application portfolio with the substance to support a successful migration.
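
To make this more concrete, here is a minimal sketch of what one application’s portfolio entry might look like as structured data. It is written in Python purely for illustration; every field name and sample value is hypothetical, and a real portfolio often lives in a spreadsheet or CMDB rather than code.

```python
from dataclasses import dataclass, field

@dataclass
class PortfolioEntry:
    """One in-scope application's record in the application portfolio.
    Field names map loosely to the numbered items above; values are illustrative."""
    application_name: str
    principal_contacts: dict = field(default_factory=dict)   # item 1: role -> person
    known_risks: list = field(default_factory=list)          # item 2
    sme: str = ""                                             # item 3: Subject Matter Expert
    vendor: str = ""
    current_version: str = ""
    target_version: str = ""
    servers: list = field(default_factory=list)               # NetBIOS/FQDN names
    ip_addresses: list = field(default_factory=list)
    ports: list = field(default_factory=list)
    service_accounts: list = field(default_factory=list)      # item 4
    documentation_links: list = field(default_factory=list)   # item 5
    migration_plan: str = ""                                   # item 6
    timetable: str = ""                                        # item 7
    test_plan: str = ""                                        # item 8
    contingencies: list = field(default_factory=list)          # item 9

# A hypothetical entry for one supply-chain application
entry = PortfolioEntry(
    application_name="SupplyTrack",
    principal_contacts={"Project Manager": "J. Smith", "Migration Lead": "A. Doe"},
    known_risks=["Support contract expires before the planned cutover"],
    sme="A. Doe",
    vendor="Contoso Software",
    current_version="4.2",
    target_version="5.0",
    servers=["SUPPLYTRK01.corp.example.com"],
    ip_addresses=["10.10.20.15"],
    ports=[443, 1433],
)
print(entry.application_name, entry.known_risks)
```

As the portfolio matures (item 2 above), entries like this get updated until every risk and assumption is resolved.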

What Is Meant by ‘New Builds’ When Discussing Cloud Migration?

Let’s assume we plan to move two applications from the local datacenter into a Microsoft Azure subscription. We will call the applications “LiftAndShift” and “NewBuild.” For the purpose of simplicity, let’s assume each application is hosted on one server: “LiftAndShift1” and “NewBuild1”.

First, we create a space for these servers and applications to live on.

Since they share data and talk to each other via shared folders on each server, we decided to create a single tenant that will ‘house’ both servers above.

Next, we meet with the application portfolio team, stakeholders, and power users.

This meeting happens so that we can agree on a sequence of events for moving these two applications. This agreement is CRITICAL as both servers need each other, and we must minimize the risk of downtime while this migration takes place. Furthermore, we must test as many things as we can as we go through this process.

We decide to use the Azure Migrate Tool to move “LiftAndShift1” first. This server is currently a virtual machine hosted on a VMware ESXi cluster of hosts running ESXi 6.5 Update 3 (build 13932383). We then download the Azure Migrate Tool from the Microsoft Azure tenant we created.

Next, it is installed as an appliance (*.ova file) into vSphere. Finally, it is configured with an Admin-level account for both SQL on-prem and the Windows Active Directory (SPECIFICALLY USED JUST FOR THIS PURPOSE — AS DIRECTED BY LEADERSHIP, NAMELY THE CISO).

A cutover weekend plan is established.

The weekend before the cutover, we run an assessment for “LiftAndShift1” using that functionality in the Azure Migrate section of the Microsoft Azure portal. Since this application is very ‘lean’ (small), the VMware virtual server on which the application ‘sits’ is also quite small.

The Azure Migrate Tool successfully completes the initial assessment and recommends two drives and a B2s target size to migrate this virtual server directly into the Azure tenant.

The cutover of “LiftAndShift1” is a success, and the post-migration testing completes with no major concerns.

In compliance with the plan created above, the “NewBuild1” server will not be migrated. Instead, we will move the server via a ‘new build’ process.

Now we commence with a ‘new build’ migration.

What does this mean? Simply stated, a ‘new build’ migration is when you first create a new server in the cloud with more than enough resources to run the application, data, etc.

Next, you install the most current version of the application software on the new server. There is one prerequisite, though: you need to engage the vendor to ensure you have access to the most current software. You’ll also need to get the support contracts and proper license structures for it.

Finally, you set up another cutover weekend where all the data is copied to the new location, and the new server is configured to work with the new data copy. It then needs to be tested by the power users to ensure functionality.

So, when the expression ‘new build’ is used in the context of cloud migration (e.g., migrating a server to Microsoft Azure), it refers to creating a new server to house the application and its data; the data is updated and then copied to that new server. The base server (operating system, etc.) will NOT BE MIGRATED using tools like the Azure Migrate Tool or the HCX appliances.

Why Are Firewalls So Important to a Cloud Migration?

What is the essence of a cloud migration? What major function does cloud migration provide?
Simply stated, the general purpose of a cloud migration is to move resources in the datacenter to a cloud provider (such as Microsoft Azure cloud). These resources can include, but are not limited to:

• general-purpose servers
• SAN/NAS
• routers
• switches
• circuits
• databases/data warehouses
• applications
• file shares/file servers
• client computers (using technologies such as Azure Virtual Desktop or Windows 365)
• email and productivity software access (using technologies such as M365 [formerly Office 365])

And so much more.

Recently, I discussed two primary reasons companies are moving to the cloud. Please view my previous post on why companies migrate to Azure if you would like that information about the process.
Now, let’s look at the total migration objectively.

We are taking both data and data processing structures from our SECURE datacenters that have gained our trust over the years (even decades, at some enterprises), and we are moving them to a new location. Even if this location were a vault at the FBI, there would be an element of concern about the overall effectiveness of the new location’s security process.

This security concern is one of the most important challenges to overcome with any Azure cloud migration. Specifically, the client or company’s concern that even with a super-secure company like Microsoft, the design of the new environment — or more specifically, the process used to migrate and position the resources — will not be as secure as what is already in the current ‘legacy’ datacenter.
This is where the firewall comes into play.

The firewall is key and very important to the migration process to help reduce concerns like this, both logically and practically. In short, firewalls are resources that function as guards at the gate; they either allow data to pass along or reject it.

Typically, a Network Engineer will program a process/algorithm that will instruct the firewall what data to accept. The standard practice in Network Engineering is to list everything that will be accepted. The last step is to essentially ‘deny anything that does not fit what I have already allowed.’ In Network Engineering lingo, this is called the ‘deny all’ statement.

The usual configurations for a firewall include a name or label for each rule, the source IP address, the destination IP address, the ports that should be allowed, and the protocols that should be allowed. I have added an example below this statement:

Name: NEW_RDP_PORTS_CR19521958
Protocol: TCP
Source Addresses: 200.152.16.9/20
Destination IP Addresses: 159.172.52.59/17
Destination Ports: 3389

Do you notice the part of the name that’s written as “CR19521958” in the above example? It is added to define the Change Management request that approved placing this new rule into the infrastructure.
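
To make the allow-list-then-deny-all logic more concrete, here is a simplified Python sketch of how a firewall evaluates a packet against rules like the one above. It is illustrative only; real firewalls match on much richer criteria and are configured through their own interfaces, and the CIDR ranges below are normalized forms of the example addresses.

```python
from ipaddress import ip_address, ip_network

# Illustrative rule table: everything explicitly allowed is listed first;
# anything that matches no rule falls through to the implicit 'deny all'.
RULES = [
    {
        "name": "NEW_RDP_PORTS_CR19521958",   # CR number ties the rule to a change request
        "protocol": "TCP",
        "source": ip_network("200.152.16.0/20"),        # normalized to a valid network
        "destination": ip_network("159.172.0.0/17"),
        "dest_ports": {3389},
        "action": "allow",
    },
]

def evaluate(protocol: str, src: str, dst: str, port: int) -> str:
    """Return 'allow' if any rule matches the traffic, otherwise the implicit deny."""
    for rule in RULES:
        if (protocol == rule["protocol"]
                and ip_address(src) in rule["source"]
                and ip_address(dst) in rule["destination"]
                and port in rule["dest_ports"]):
            return rule["action"]
    return "deny"   # the 'deny all' statement at the end of the rule set

print(evaluate("TCP", "200.152.16.9", "159.172.52.59", 3389))  # allow
print(evaluate("TCP", "8.8.8.8", "159.172.52.59", 3389))       # deny
```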

Now that we have all of that out of the way, let’s quickly answer the question at hand:

Why are firewalls so important to a cloud migration?

The simple answer is that they are a key line of defense against data hacks — infrastructure security.

Basically, a firewall (or many of them) is the first device that all data is filtered through as soon as it is out of the WAN cloud (think internet traffic; coming and going). This super-specific filtering process adds major security to any environment — and that makes your Cyber Security team VERY HAPPY!

…and remember: ALWAYS KEEP YOUR CYBER SECURITY TEAM HAPPY – ALWAYS!

What Is Meant by “Automation”?

One of the goals of utilizing Information Technology tools and resources is to build a process. The process is a step-by-step plan that can take you from a pre-planned state to a predetermined result. You can execute this process multiple times and usually get similar results.

Having a plan makes getting results easier. You do not have to expend time and energy remembering what worked and didn’t work, trying to replicate the results. The process gives the owner peace of mind regarding execution and the inner knowledge that they know what to do, and it will probably work as written.

Once the technology professional has a PROVEN plan/process, the next stage is to determine the tools and resources that can help reduce the amount of human interaction required to execute the tasks. Once those tools are selected and configured for the appropriate steps, the process is verified again, and more tools are found until the process runs with as little human interaction as possible.

Automation is the practice of taking a specific process (of steps/stages) and finding tools and resources to complete steps in that process without human interaction.

A process is fully automated when no major human interaction is required to complete it. This is the ultimate goal of many technologists: to create the process, then fully automate it.

In short, automation is the process of removing manual/human interaction from the completion of a process with expected starting points and end results.
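
For a concrete (if tiny) illustration, here is a sketch in Python of a proven process captured as ordered steps that run without a human working from memory. The step names are hypothetical placeholders, not a real migration runbook.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# A hypothetical, proven process captured as an ordered list of steps.
def stop_application():      logging.info("Application services stopped")
def back_up_database():      logging.info("Database backup completed")
def copy_data_to_target():   logging.info("Data copied to target location")
def start_application():     logging.info("Application services started")
def run_smoke_tests():       logging.info("Smoke tests passed")

PROCESS = [stop_application, back_up_database, copy_data_to_target,
           start_application, run_smoke_tests]

def run_process(steps):
    """Execute every step in order; stop on the first failure so a human
    only has to step in when something unexpected happens."""
    for step in steps:
        try:
            step()
        except Exception:
            logging.exception("Step %s failed - manual intervention needed", step.__name__)
            return False
    return True

if __name__ == "__main__":
    run_process(PROCESS)
```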

What Is Meant By “Control of the Tenant”?

One of the more commonly discussed ideas in the migration space is “control of the tenant.” However, there is not a lot of discussion about this important aspect of Microsoft Azure migration in courses. Let’s fix that deficiency by discussing in depth what control of the tenant means.

First, a tenant is an instance of Azure AD combined with the resources that utilize this specific Azure Active Directory instance. For each tenant in the Microsoft Azure cloud, there exists a Microsoft Azure Active Directory instance that is specifically allocated to it. All the resources (virtual machines, network security groups, M365 [Office 365], etc.) that are related to that Azure AD instance are also built as resources (members) of the tenant.

Now that we understand what a tenant is, we can quickly discuss what is meant by control of the tenant. Let me start with a short story.

Imagine you decide to learn more about Microsoft Azure. You have a credit card and sign up for the free tier (a small subset of all the available resources in Azure that you can use for free). You name the subscription. Furthermore, you set up billing so that if the monthly bill accidentally reaches $20 USD, all the resources are turned off for the month. You are quite the cost-conscious person!

You want to share your work with three fellow IT technicians who are also learning more in Azure. You have their email addresses and full names, so you create three new Azure Active Directory guest accounts.

The next question is:
How will rights in this tenant be assigned?

You need to have ‘Owner’ and ‘Contributor’ Role-Based Access Control (RBAC) roles set up and add each account to the subscription. How will you set this up?

When we discuss control of the tenant, we refer to the person who will have the Owner RBAC role on the subscription and the Global Administrator role in Azure AD. In this instance, you decide that you alone will have control of the tenant. Each of the other technicians will have the ‘Contributor’ RBAC role on the subscriptions added to their accounts, with their Azure AD role set to ‘Global Reader.’

In this way, your colleagues can see everything and work with the resources in the subscription, but they cannot grant access or change directory-wide settings. Only you have full rights across the board.
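
Here is a small conceptual sketch in Python of the access split just described. It is not the Azure SDK or a real assignment mechanism; the accounts and role strings are purely illustrative of who holds control of the tenant.

```python
# Conceptual model only - real assignments are made in the Azure portal,
# Azure CLI, or the Azure SDK; accounts and roles here are illustrative.
ASSIGNMENTS = {
    "you@example.com":   {"rbac_role": "Owner",       "aad_role": "Global Administrator"},
    "tech1@example.com": {"rbac_role": "Contributor", "aad_role": "Global Reader"},
    "tech2@example.com": {"rbac_role": "Contributor", "aad_role": "Global Reader"},
    "tech3@example.com": {"rbac_role": "Contributor", "aad_role": "Global Reader"},
}

def can_change_resources(user: str) -> bool:
    """Owner and Contributor can create/modify subscription resources; Reader cannot."""
    return ASSIGNMENTS[user]["rbac_role"] in {"Owner", "Contributor"}

def controls_tenant(user: str) -> bool:
    """'Control of the tenant' = full rights over both the subscription (Owner)
    and the directory (Global Administrator)."""
    a = ASSIGNMENTS[user]
    return a["rbac_role"] == "Owner" and a["aad_role"] == "Global Administrator"

print(controls_tenant("you@example.com"))    # True  - you control the tenant
print(controls_tenant("tech1@example.com"))  # False - contributor, directory reader only
```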

So, simply stated, control of the tenant refers to the person/people who can make any changes they desire in the Azure tenant and have those changes saved and implemented.

This question will be important for any migration to Microsoft Azure you run into. Who will have control of the tenant? Will it be the CISO (Chief Information Security Officer), the CIO (Chief Information Officer), the Cloud Administration team, or the IT Support team? Or will it perhaps be IT Management or even a third-party MSP (Managed Services Provider)?

The answer is contingent on the perspectives of all the stakeholders.

What Is Active Directory?

What does the typical office workday look like in the 21st century?

You wake up. You shower and get cleaned up (brush teeth, brush hair, etc.). You select your clothes for the day. You grab a snack or small breakfast. You then lock the home/apartment for the day, start the car, and drive to the office. You park your car. You walk to your desk, saying good morning to a few co-workers as you get to your office. You sit down, log into the computer, and open your email client (Microsoft Outlook, Lotus Notes, etc.). While your emails and calendar update, you log into your work phone and write down your voicemails to facilitate calling people back during the day.

Does this sound familiar? It does? Good, then we can work from here.

Let’s look at what you do when you sit at the desk. You logged into the computer at your desk. You typed in a username and password combination known only to you. This was given to you by the I.T. department or Human Resources when you joined the company, and you have been regularly updating the password per I.T. Security policy and guidelines.

This username and password allow you to log into company computers and get similar access to resources, regardless of the machine used or the time at which you use it. The username and password are stored on a set of servers; each username has assigned to it specific access and usage abilities that have been approved by both I.T. and your departmental supervision and management.

This username is stored on servers. If your company has a Microsoft Windows or Microsoft Azure infrastructure, the servers that store this information for the entire organization are Active Directory servers (note: if your company has a Linux or Unix infrastructure, these servers typically run an LDAP directory rather than Active Directory, but the logic is similar).

Active Directory, simply stated, is a Microsoft product that uses accounts (called objects) to control (give or revoke) permissions to other objects, groups of objects, and network resources.

For each user who logs into a Microsoft Windows account, there exists an object in the company Active Directory, which spans the company network (called the domain). When the correct username and password are entered for the domain, you are granted access to network (domain) resources based on how that object is configured.

Objects exist in Active Directory for users, printers, network groups, and so much more.
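
For a hands-on flavor, here is a minimal sketch of reading user objects from Active Directory over LDAP using Python’s ldap3 package. The domain controller name, base DN, and service account are hypothetical placeholders; a real query would use your own domain’s values.

```python
# A minimal sketch of listing user objects in Active Directory over LDAP.
# Assumes the ldap3 package; server, credentials, and base DN are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server("ldap://dc01.corp.example.com", get_info=ALL)
conn = Connection(server, user="CORP\\svc_directory_read",
                  password="<service-account-password>", auto_bind=True)

# Every user, printer, and group is an object; here we list person/user objects,
# whose group memberships are the basis for granting access to resources.
conn.search(
    search_base="dc=corp,dc=example,dc=com",
    search_filter="(&(objectCategory=person)(objectClass=user))",
    attributes=["sAMAccountName"],
)
for entry in conn.entries:
    print(entry.sAMAccountName)
```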

So, in short, Microsoft Active Directory is an organized hierarchy of objects that control access to resources.

What Is an Azure Resource Group?

Imagine that you are working in Microsoft Azure. You plan to use one Windows Server 2019 virtual machine, two Windows 11 virtual desktops to connect to it, and the network infrastructure to support open communication between all computers.


You also will deploy Azure Files with Server Message Block (SMB) support. You will use Azure AD services for authentication purposes and to log in to the computers. Further, you will have a firewall deployed with only ports 22, 80, 123, 443, and 3389 open on both the incoming and outgoing rules. The IP segment will be 192.168.0.0 with a subnet mask of 255.255.255.0.
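
As a quick sanity check of that network design, Python’s ipaddress module can show what the 192.168.0.0 segment with a 255.255.255.0 mask actually covers:

```python
# 192.168.0.0 with mask 255.255.255.0 is the /24 network 192.168.0.0/24.
from ipaddress import ip_network

segment = ip_network("192.168.0.0/255.255.255.0")
hosts = list(segment.hosts())
print(segment)                      # 192.168.0.0/24
print(hosts[0], "-", hosts[-1])     # 192.168.0.1 - 192.168.0.254
print(segment.num_addresses - 2)    # 254 usable host addresses
```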


Everything needs to be built in the East US geographic location when applicable. In the future, plans to have geographic replication to a West US geographic location will be discussed.


Now, you get to the business of building this design in the Azure Portal. As you are working on this build, you start thinking about making this deployment organized and ‘neat’ in the portal. Soon, one question rises to your mind:


Should I use resource groups to segment this further and make it more organized?


You now start researching more on resource groups and how they are set up and utilized, and you discover that resource groups are logical groups of Azure resources. Some of the items in these groups can include:


Virtual machines
Virtual routers
Virtual firewalls
Virtual Desktop Instances (VDI)
Storage Accounts
Virtual Networks
Databases
Web Apps

And much more!


Furthermore, you discover that the most common way to divide resources is into production, development, and test environments.


Now that you know what an Azure Resource Group is, you can put all the resources into one resource group called Production-EastUS. This will keep everything in one logical group and help in the future as plans for the West US replication site are investigated and then implemented.
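
If you prefer to script this step, here is a minimal sketch of creating that resource group with the Azure SDK for Python. It assumes the azure-identity and azure-mgmt-resource packages are installed; the subscription ID is a placeholder.

```python
# A minimal sketch of creating the Production-EastUS resource group.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# All resources for this design will be deployed into this one logical group
# in the East US region.
resource_group = client.resource_groups.create_or_update(
    "Production-EastUS",
    {"location": "eastus"},
)
print(resource_group.name, resource_group.location)
```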


So, what is an Azure Resource Group? Simply stated, it is a logical group of Azure items deployed to a geographic location.

Why Do Companies Migrate to Azure?

In modern business, one of the areas expanding exponentially is Microsoft Azure’s cloud computing. More and more institutions, as well as individuals, are moving their computer-related tasks to Azure. This is part of the cloud computing age, which is going to grow more and more in the coming years.

Now, this raises a question: Why do companies migrate to Microsoft Azure?

There are many answers to this question. However, I will focus on two major reasons why companies migrate to Microsoft Azure: Reducing costs and increasing performance.

REDUCING COSTS

If I could pick one driver for migrating to Azure, it would be reducing costs. Remember, the cloud (including Azure, AWS, GCP, and more) is just a set of large datacenters that you rent to host your Information Technology tools. You pay a recurring cost to have the luxury of using another datacenter to run your tools.

With Azure cloud usage, you can reduce the overall Information Technology costs for some of the following reasons:

  1. No need to purchase and warranty servers
  2. No need to purchase and warranty routers and switches
  3. No need to purchase and warranty network-attached storage devices
  4. No need to purchase and warranty storage area network devices
  5. A cost reduction as you do not need to purchase and insure a building for a datacenter
  6. A cost reduction as you do not need to purchase and maintain the network connectivity for the building
  7. A cost reduction as you do not need to pay for the electricity to the building

And MUCH MORE…

These costs are shifted to Microsoft (if you are using Azure cloud), and the overall costs are divided into hourly/compute-usage units, so you are only charged for what you use. Most businesses only use a small fraction of the total compute power available to them, so the costs are a fraction of their current spending.
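
As a rough illustration of being charged only for what you use, compare a virtual machine that runs around the clock with one that only runs during business hours. The hourly rate below is a made-up placeholder, not a published Azure price:

```python
# Hypothetical pay-as-you-go comparison; the hourly rate is a placeholder,
# not a published Azure price.
hourly_rate_usd = 0.05          # assumed cost of a small VM per hour
hours_in_month = 730            # average hours per month

always_on = hourly_rate_usd * hours_in_month
business_hours_only = hourly_rate_usd * 10 * 22   # 10 hours/day, 22 workdays

print(f"Running 24x7:            ${always_on:.2f}/month")
print(f"Running business hours:  ${business_hours_only:.2f}/month")
```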

INCREASING PERFORMANCE

One of the largest advantages that Microsoft Azure presents is its ability to increase performance. Microsoft is continually building more servers across the United States and the world at large.
As these new datacenters are constructed, the latest and greatest physical devices and networking are used to provide users with the best experience in Azure. Additionally, new tools are continuously being made available in the various portals for Azure, which increase the options for performance and optimizing execution.

With Azure cloud usage, you can increase the overall performance of your Information Technology infrastructure for some of the following reasons:

  1. You can increase application compute resources within seconds
  2. You can increase application network resources within seconds
  3. You can increase application storage resources within seconds
  4. You can increase application database resources within seconds
  5. You can increase application security resources within seconds
  6. You can link multiple copies of an application infrastructure (redundancy) for near 100% availability
  7. The supporting platform in Microsoft Azure will have the latest updates, improving performance and stability

And MUCH MORE…

For so many reasons like the ones above, it is easy to see why companies are eager to move more tools to the cloud — YOU ARE GETTING MORE PERFORMANCE FOR LESS COST.

What Does “Migrate to Azure” Mean?

A large amount of money made in Information Technology comes from business (aka B2B) markets and consumer (aka B2C) markets. Additionally, an emerging market is individuals building Information Technology based tools for other consumers (aka C2C).

A significant portion of this market is the devices that these tools operate on/from (aka hardware). These can include physical servers, storage area networks, routers, switches, network-attached storage, firewalls, and much more. Keep in mind that these devices are primarily located in datacenters or ‘network closets.’

As the Azure computing generation continues to move forward and expand in the marketplace, Azure cloud computing costs continually reduce. The cost reductions increasingly spawn more opportunities for more businesses to afford to build profits from running in Azure.

This presents a problem: How can Azure be used when these Information Technology solutions are running in datacenters?

The answer is simple: MIGRATE TO AZURE!

When a datacenter or network closet is migrated to Azure, the result is a structure similar to how the datacenter is currently constructed. Using virtualized devices (i.e., software that performs the same functions as the corresponding physical devices), you are able to recreate the current datacenter in Azure using some of the many tools that Azure offers.

The next stage is to copy the applications and data currently running in the datacenter to Azure. As the data and applications that are shared are moved, application experts are on standby to properly reconfigure and later test these applications.

At this time, a group of ‘power users’ (i.e., clients who use the software and have a deep understanding of how it should work and operate) are engaged to use the software.

Finally, all the customers who use these applications are directed to use the Azure cloud implementation; shortly afterward, the old datacenter’s copy of the software is backed up, and then the old instance is deleted (called “retired”).

This process is known as migrating to Azure cloud … full of opportunity and increasingly in demand in the marketplace.