What servers should I virtualize?

Server virtualization is the process of restructuring a single server into multiple small, isolated virtual servers. This process does not require new or additional servers; instead, virtualization software or hardware takes the existing server and partitions it into several isolated virtual servers.

Each of these servers is capable of running independently. Servers are the technology that hosts files and applications, providing functionality for other programs.

This device processes requests and delivers data to other computers over a local area network (LAN) or wide area network (WAN). Servers are often very powerful, processing complex tasks with ease. A single server can run only one operating system (OS) and is usually dedicated to a single application or task, because most applications do not function effectively together on a single server.

However, when a server is virtualized, it is transformed into multiple virtual servers, each of which can run different operating systems and applications in an isolated environment.

The server has been working quite well, and I haven't received any complaints about speed. I'm debating whether I should keep a single physical server as it is now (just upgraded to the newer Windows Server release), or install Hyper-V and create two virtual servers: a DC and a SQL server with the shared files.

Do you guys see any benefit in virtualizing? Would backups be easier this way? I thought of using Windows Backup, running it from the host.

I wouldn't use Windows Backup. There are better options, like the free versions of Veeam and Unitrends. My feeling is you should virtualize unless there is a valid reason not to.

Even something as simple as snapshotting before applying patches makes virtualization worth it. Virtualize it all; however, be careful when you virtualize the primary domain controller.
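
For what it's worth, here is a minimal Python sketch of what pre-patch snapshotting can look like on a Hyper-V host; the VM name is a placeholder, and the script assumes it runs on the host with administrative rights. Checkpoint-VM is the standard Hyper-V PowerShell cmdlet. Per the warning above, don't do this to a domain controller.

    import subprocess
    from datetime import datetime

    vm_name = "SQL01"  # placeholder; do NOT checkpoint domain controllers
    snapshot = f"pre-patch-{datetime.now():%Y%m%d-%H%M}"

    # Invoke the standard Hyper-V cmdlet from Python.
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         f"Checkpoint-VM -Name '{vm_name}' -SnapshotName '{snapshot}'"],
        check=True,
    )
    print(f"Checkpoint '{snapshot}' created for VM '{vm_name}'")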

Take a look at the following link; there is lots of discussion on the internet about this. As the others said, virtualize everything. On another point, don't use RAID5 with hard drives: rebuilds can fail with large drives. For a virtual server host, RAID10 would be the best-performing option for hard disks while still having excellent redundancy.
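
To make the RAID5 concern concrete, here is a back-of-envelope Python sketch of the chance of hitting an unrecoverable read error (URE) during a rebuild. The drive size and the 1-in-10^14-bits URE rate are assumptions taken from typical consumer-drive spec sheets, not measurements.

    # Chance that a RAID5 rebuild reads every surviving bit without a URE.
    drive_tb = 8              # assumed size of each drive, in TB
    data_drives = 3           # surviving drives read during the rebuild
    ure_per_bit = 1e-14       # assumed URE rate (consumer-drive spec figure)

    bits_read = drive_tb * 1e12 * 8 * data_drives
    p_success = (1 - ure_per_bit) ** bits_read
    print(f"Bits read during rebuild: {bits_read:.2e}")
    print(f"Probability rebuild completes with no URE: {p_success:.1%}")

With these assumed figures the rebuild succeeds only about 15% of the time, which is the arithmetic behind the "rebuilds can fail with large drives" warning.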

SSDs don't have the rebuild failure risk that hard drives do. Thanks, I'm familiar with the free Veeam, but as far as I know you can only run it manually, not on a schedule.

To jump onto the "virtualize it" train, I would agree with what has been said above. Some of the benefits of virtualization are less overhead, faster deployment and redeployment, quick and easy backups, and an improved test environment.

Just a lot more pros than cons, in my opinion.

Thanks, all. Sounds like virtualizing is definitely the way to go. Do you have any suggestions for automatic scheduled backups, hopefully supporting encryption?

Why stop at that version of Windows Server? Why not just go ahead to the newest release? In the long run it would be a better value because it is right at the start of its life cycle. Microsoft has done a fairly good job with it. I am running it in my home lab. I agree with the virtualization consensus of this group, though, mainly because it will allow you to use your server more fully.
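
On the scheduled, encrypted backup question: without endorsing any particular product, one vendor-neutral sketch is a small script that archives a folder of VM exports, encrypts the archive, and is triggered nightly by Windows Task Scheduler (schtasks) or cron. The paths, the key location, and the use of the third-party cryptography package (pip install cryptography) are all assumptions for illustration.

    import tarfile
    from datetime import datetime
    from pathlib import Path
    from cryptography.fernet import Fernet  # pip install cryptography

    source = Path(r"D:\VM-Exports")           # assumed export folder
    target = Path(r"\\backup-nas\nightly")    # assumed backup share
    # Key generated once beforehand with Fernet.generate_key().
    key = Path(r"C:\secrets\backup.key").read_bytes()

    archive = target / f"vm-backup-{datetime.now():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)

    # Encrypt, then drop the plaintext archive. Reading the whole file
    # into memory keeps the sketch short; fine for modest archive sizes.
    encrypted = archive.with_name(archive.name + ".enc")
    encrypted.write_bytes(Fernet(key).encrypt(archive.read_bytes()))
    archive.unlink()
    print(f"Wrote {encrypted}")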

Not only could you have a separate domain controller, but you can add other servers to the mix. I would suggest that you have more than one physical server, though. Virtualization should be done at a data-center level and not targeted at specific physical servers, especially if people do not practice running servers correctly. Even if you only have the capital for one physical host (much like you do now), I'd still virtualise your servers on it.

As you say, backups are far easier when your servers are simply files, and it would be better than your current setup. Always look at backup solutions from a recovery point of view. You will not be able to recover objects in the VM other than files using Veeam, but there is the alternative of restoring the entire VM. Do note the warning about not restoring domain controllers unless you want months of troubleshooting. You do not need to back up DCs, just DC objects.

Barker: Our clients range in size from small businesses with a few employees up to large enterprises with over 1,000 employees.

The overall client demographic is a mixture of colocation, public cloud, and private managed clouds. While colocation represents the largest share of our business within the context of virtualization, the majority of the smaller clients reside on the public cloud platform that we operate, while the larger enterprises tend to go for private managed cloud platforms based around [Microsoft] Hyper-V or [Dell Technologies] VMware.

Barker: The biggest challenge in virtualization is still the sharing of resources across your infrastructure and applications. Whichever way you look at it, some things will need to be prioritized over others within the infrastructure. When designing a virtualized platform, it is a balancing act between competing resources; most likely you will still have bottlenecks, but hopefully you will have moved them to where they have the least impact on your applications.

You would need to consider the network provision, both for external WAN traffic and for storage traffic. If you are consolidating physical machines, each with a fairly heavily utilized 1Gb network interface, down to 10 hypervisor nodes, it is likely you will need to bump the network to at least 10Gb in order to cope with the condensed traffic of those systems running on a reduced number of NICs.
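
To put rough numbers on that consolidation math, here is an illustrative Python sketch; every figure in it is an assumption, not a measurement.

    physical_hosts = 40        # assumed number of physical machines
    nic_gbps = 1.0             # each with a 1Gb NIC
    avg_utilization = 0.5      # assumed average NIC utilization
    hypervisors = 10           # consolidation target

    aggregate_gbps = physical_hosts * nic_gbps * avg_utilization
    per_node_gbps = aggregate_gbps / hypervisors
    print(f"Aggregate steady-state traffic: {aggregate_gbps:.0f} Gb/s")
    print(f"Per hypervisor node: {per_node_gbps:.1f} Gb/s "
          f"-> a single 1Gb NIC per node won't cope; 10Gb is needed")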

You can't always expect to pick the existing network up and drop it into a newly virtualized environment. Similar issues exist with the storage. Most virtualized deployments still provision a central storage array, and this is quite often the bottleneck for virtualized system deployment.

Rittwage: We still run into hardware security dongles that need to be attached to USB, and sometimes they will not "poke through" the virtualization layer into the VM guest. We also still occasionally run into a software vendor that doesn't "support" virtualization and then won't help with product support, but that is rarer now.

Koblentz: What are the solutions to address those challenges when you're planning a virtualization project?

Barker: While there are technical solutions that can help alleviate some of these issues, such as SSD caching within the storage array or moving to a clustered storage platform, they have their own drawbacks, which need to be considered when looking at them to mitigate the challenges.

One of the best ways to mitigate the issues is through detailed benchmarking of the current physical servers and planning how you are going to virtualize the infrastructure. By having this information early on, you can make hardware procurement decisions that will at least match current performance, and hopefully improve it through newer chipsets, better memory, etc.
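
A baseline doesn't need expensive tooling. As a hedged example, the following Python sketch samples CPU and RAM with the third-party psutil package (pip install psutil); the interval and duration are arbitrary, and a real baseline should run for days across peak periods and also capture disk and network counters.

    import time
    import psutil

    samples = []
    for _ in range(60):                       # roughly one hour of samples
        cpu = psutil.cpu_percent(interval=1)  # % CPU over a 1s window
        mem = psutil.virtual_memory().percent
        samples.append((cpu, mem))
        time.sleep(59)                        # one sample per minute

    peak_cpu = max(s[0] for s in samples)
    peak_mem = max(s[1] for s in samples)
    print(f"Peak CPU: {peak_cpu:.0f}%  Peak RAM: {peak_mem:.0f}%")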

It also pays to properly map out failure scenarios within the virtualized environment, and to keep spare hypervisor resources available to absorb at least the failure of one physical hypervisor node, so that its virtual machines have somewhere to migrate without overly impacting the performance of the virtual machines and applications already running on the surviving nodes.
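
That N+1 check reduces to simple arithmetic. Here is an illustrative Python sketch with assumed figures, using RAM because it cannot be oversubscribed the way CPU can:

    nodes = 4
    ram_per_node_gb = 256
    vm_ram_demand_gb = 700     # total RAM allocated to running VMs (assumed)

    # Can the surviving nodes absorb the load of one failed hypervisor?
    usable_after_failure = (nodes - 1) * ram_per_node_gb
    headroom = usable_after_failure - vm_ram_demand_gb
    print(f"RAM available after one node fails: {usable_after_failure} GB")
    print(f"Headroom: {headroom} GB -> "
          f"{'OK' if headroom >= 0 else 'oversubscribed'}")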

Rittwage: Usually there is an alternative licensing solution available other than hardware keys, but you have to know about it before the migration. There is also software to virtualize USB devices.

Barker: The usual things that go wrong when deploying virtualization could be summed up as improper balancing of node resources. In a virtualized environment RAM isn't shared between virtual machines, so you are likely to run out of memory well before you run out of CPU. CPU can usually be oversubscribed more than originally planned, but a good rule of thumb is 1 physical core to 4 virtual cores.
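
Applying those rules of thumb, a quick sizing sketch might look like this; the workload figures are assumptions for illustration.

    vms = 20
    vcpus_per_vm = 4
    ram_per_vm_gb = 16

    cores_needed = (vms * vcpus_per_vm) / 4   # 4:1 vCPU-to-core rule of thumb
    ram_needed_gb = vms * ram_per_vm_gb       # RAM cannot be oversubscribed
    print(f"Physical cores needed: {cores_needed:.0f}")
    print(f"Physical RAM needed: {ram_needed_gb} GB (plus hypervisor overhead)")

With these numbers the cluster needs only 20 physical cores but 320 GB of RAM, which illustrates Barker's point that memory, not CPU, is usually exhausted first.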

In terms of using a colocation server, power consumption continues to get more efficient with newer servers. The advantage is that this enables customers to save money, as power is typically what costs the most in a colo-type environment. Regardless of the option you choose, fully understanding your business requirements is critical. Certain applications may require more dedicated server resources due to a diminished tolerance for risk in performance.
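
As a hedged illustration of why power dominates colo costs, here is a back-of-envelope saving calculation in Python; the server counts, wattages, and per-kWh rate are assumptions, not quoted prices.

    old_servers, old_watts = 10, 400   # assumed pre-consolidation fleet
    new_servers, new_watts = 2, 600    # assumed post-consolidation hosts
    rate_per_kwh = 0.12                # assumed $/kWh
    hours_per_year = 24 * 365

    def yearly_cost(count, watts):
        return count * watts / 1000 * hours_per_year * rate_per_kwh

    saving = (yearly_cost(old_servers, old_watts)
              - yearly_cost(new_servers, new_watts))
    print(f"Estimated yearly power saving: ${saving:,.0f}")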

In other cases, when it comes to assets that are rarely used, you may be fine to sacrifice performance and speed for cost savings. Regardless of which route you go, understanding your need for performance is crucial to having the best experience with dedicated equipment or a virtualization vendor.

Direct connections into cloud services have made utilizing a cloud service such as AWS or Azure easier. Getting GigE and 10GigE circuits using Atlantech Online Cloud Connect or similar services makes this more practical than having to rely on connections over the public Internet.

Since virtualization servers are located offsite, you have an immediate advantage in terms of disaster recovery. In many cases, vendors with appropriate risk-mitigation planning can significantly improve your business continuity planning.

Risk mitigation ultimately depends on the configuration of your dedicated or virtual servers. In many cases, companies are able to significantly mitigate risk by switching to virtualization vendors that offer appropriate safeguards against hardware failure and backups both on and offsite.

The security of your physical or virtual servers depends largely on configuration, staff knowledge, and environment. For many organizations with minimal budget or hardware, switching to virtualization can offer significant gains in security protection.

As your data assets increase, maintaining appropriate temperature and humidity can become more challenging. Does your staff have the knowledge and bandwidth to appropriately manage server acquisition, maintenance, configuration, and security?

Perhaps more important, are they aware of best practices for increasing efficiency and realizing cost savings? Switching to virtualization can free your IT team from dealing with data storage and server management, allowing them to focus on other priorities and opportunities for cost savings.

Many organizations choose to slowly migrate their workloads to virtualization over time. If this is your intent, communicate with your vendor about their existing migration tools, and have a conversation about application compatibility. Most businesses find that migration to virtualization, even when performed slowly over time, is much easier than they think. You may have certain data assets that do not contain payment, health, or other types of information subject to regulatory requirements; those lower-sensitivity assets are often the best candidates to migrate first.


